In 2017 my Website was migrated to
the cloud and reduced in size.
Hence some links below are broken.
One thing to try if a “www” link is broken is to substitute “faculty” for “www”.
For example, the broken link
http://www.trinity.edu/rjensen/Pictures.htm
can be changed to the corrected link
http://faculty.trinity.edu/rjensen/Pictures.htm
However, in some cases files had to be removed to reduce the size of my Website.
Contact me at rjensen@trinity.edu if
you really need a file that is missing.
Bob Jensen's Threads on
Assessment
Bob
Jensen at Trinity
University
George Carlin - Who Really Controls America ---
Click Here
"More kids pass tests if we simplify the tests --- Why education will never be
fixed."
The Downfall of Lecturing
Measuring Teacher Effectiveness
Altmetrics of Total-Impact
Coaches Graham and Gazowski
Grade Inflation, Teaching Evaluations and RateMyProfessor
Academic Whores: School Systems into Lowering Standards
for Achievement Tests and Graduation
Performance Evaluation and Vegetables
Rubrics in Academia: Assessing Without
Tests
How to Mislead With Statistics of Merit Scholars:
"Mom, Please Get Me Out of South Dakota!"
The Real Reason Organizations Resist Analytics
The New GMAT
Head Start Programs
Assessment by Ranking May Be a Bad Idea
Assessment by Grades May Be a Bad Idea
The Future: Badges of Competency-Based
Learning Performance
Concept Knowledge, Critical Thinking, Competency Testing, and Assessment of Deep
Understanding
Tips on Preparing Multiple Choice Examinations
Onsite Versus Online Differences for Faculty
Online Versus Onsite for
Students
Onsite Versus Online Education
(including controls for online examinations and assignments)
Student Engagement
Students Reviewing Each Other's Projects
Online Education Effectiveness and Testing
What Works in Education?
Predictors of Success
Minimum Grades as a School Policy
Team Grading
Too Good to Grade:
How can these students get into doctoral programs and law school if their
prestigious universities will not disclose grades and class rankings? Why
grade at all in this case?
Software for faculty and departmental
performance evaluation and management
K-12 School and College Assessment and College Admission Testing
Civil Rights Groups That Favor
Standardized Testing
Computer-Based Assessment
Computer Grading of Essays
Outsourcing the Grading of Papers
Assessment in General (including the
debate over whether academic research itself should be assessed)
Competency-Based Assessment
Assessment
Issues: Measurement and No-Significant-Differences
Dangers of Self Assessment
The Criterion Problem
Success Stories in Education Technology
Research Versus Teaching
"Favorite Teacher" Versus "Learned the Most"
Grade Inflation Versus Teaching Evaluations
Student Evaluations and Learning Styles
Assessment Takes Center Stage in Online Learning: The
Saga of Western Governors University
Measures of Quality in Internet-Based Distance
Learning
Number Watch: How to Lie With Statistics
Drop Out Problems
On
the Dark Side
Accreditation Issues
Software
for Online Examinations and Quizzes
Onsite Versus Online Education
(including controls for online examinations and assignments)
The term "electronic
portfolio," or "ePortfolio," is on everyone's lips. What
does this mean?
Research Versus Teaching
"Favorite Teacher" Versus "Learned the Most"
Grade Inflation Versus Course Evaluations
Work Experience Substitutes for College Credits
Certification (Licensing) Examinations
Should attendance guarantee passing?
Peer Review Controversies in Academic Journals
Real Versus Phony Book Reviews
Research Questions About the Corporate Ratings
Game
Cause Versus Correlation
Differences between "popular teacher"
versus "master teacher"
versus "mastery learning"
versus "master educator."
Edutopia: Assessment (a broader look at education assessment) ---
http://www.edutopia.org/assessment
Look beyond high-stakes testing to learn about
different ways of assessing the full range of student ability -- social,
emotional, and academic achievement.
Bob Jensen's threads on assessment ---
http://faculty.trinity.edu/rjensen/Assess.htm
Mathematics Assessment: A Video Library ---
http://www.learner.org/resources/series31.html
November 1, 2012 Respondus message from Richard Campbell
Is the student taking your class the same one who is taking your exams??
Keep an eye on
www.respondus.com
Respondus Monitor - online exams proctor ---
http://youtu.be/lGyc_HBchOw
Software for online examinations and quizzes ---
http://faculty.trinity.edu/rjensen/Assess.htm#Examinations
Test Drive Running a University ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#TestDrive
Is student usage of Facebook correlated with lower grades?
Concerns About Social Networking, Blogging, and Twittering in
Education ---
http://faculty.trinity.edu/rjensen/ListservRoles.htm
Bob Jensen's threads on Cognitive Processes and Artificial Intelligence
are at
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#CognitiveProcesses
Degrees Versus Piecemeal Distance (Online)
Education
Bob Jensen's threads on memory and metacognition are at
http://faculty.trinity.edu/rjensen/265wp.htm
Full Disclosure to Consumers of Higher Education (including assessment
of colleges and the Spellings Commission Report) ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#FullDisclosure
Also see
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#Bok
Publish Exams Online ---
http://www.examprofessor.com/main/index.cfm
Controversies in Higher Education ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
Bob Jensen's threads on cheating and plagiarism ---
http://faculty.trinity.edu/rjensen/plagiarism.htm
Effort Reporting Technology for Higher Education ---
http://www.huronconsultinggroup.com/uploadedFiles/ECRT_email.pdf
Some Thoughts on Competency-Based Training
and Education ---
http://faculty.trinity.edu/rjensen/competency.htm
You can download (for free) hours of
MP3 audio and the PowerPoint presentation slides from several of the best
education technology workshops that I ever organized. --- http://www.cs.trinity.edu/~rjensen/002cpe/02start.htm
Center for Research on Learning and Teaching ---
http://www.engin.umich.edu/teaching/crltengin/researchscholarship/index.html
Asynchronous Learning Advantages and
Disadvantages ---
http://faculty.trinity.edu/rjensen/255wp.htm
Dark Sides of Education Technologies ---
http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
For threaded audio and email
messages from early pioneers in distance education, go to http://faculty.trinity.edu/rjensen/ideasmes.htm
Full Disclosure to Consumers of Higher Education at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#FullDisclosure
American Council on Education - GED Testing --- http://www.acenet.edu/Content/NavigationMenu/ged/index.htm
From PhD Comics: Helpers for Filling Out Teaching Evaluations ---
http://www.phdcomics.com/comics.php?f=847
As David Bartholomae observes, “We make a huge
mistake if we don’t try to articulate more publicly what it is we value in
intellectual work. We do this routinely for our students — so it should not be
difficult to find the language we need to speak to parents and legislators.” If
we do not try to find that public language but argue instead that we are not
accountable to those parents and legislators, we will only confirm what our
cynical detractors say about us, that our real aim is to keep the secrets of our
intellectual club to ourselves. By asking us to spell out those secrets and
measuring our success in opening them to all, outcomes assessment helps make
democratic education a reality.
Gerald Graff, "Assessment Changes
Everything," Inside Higher Ed, February 21, 2008 ---
http://www.insidehighered.com/views/2008/02/21/graff
Gerald Graff is professor of English at the University of Illinois at Chicago
and president of the Modern Language Association. This essay is adapted from a
paper he delivered in December at the MLA annual meeting, a version of which
appears on the MLA’s Web site and is reproduced here with the association’s
permission. Among Graff’s books are Professing Literature, Beyond the
Culture Wars and Clueless in Academe: How School Obscures the Life of the Mind.
Would-be lawyers in Wisconsin who have challenged
the state’s policy of allowing graduates of state law schools to practice law
without passing the state’s bar exam will have their day in court after all, the
Associated Press reported. A federal appeals court has reinstated a lawsuit
challenging the practice, which apparently is unique in the United States.
Katherine Mangan, "Appeals Court Reinstates Lawsuit Over Wisconsin's Bar-Exam
Exemption," Chronicle of Higher Education, January 29, 2008 ---
Click Here
Forwarded by John Stancil
Seems that a prof allowed an 8½ x 11 sheet of paper as the note card during
a closed-book examination.
One student asks, “Let me get this straight. I can use anything I put on the
card?”
The prof says, “Yes.”
The day of the test, the student brought a blank sheet of paper, put it on
the floor, and had a grad student stand on the paper.
"How Do People Learn," Sloan-C Review, February 2004 ---
http://www.aln.org/publications/view/v3n2/coverv3n2.htm
Like some of the
other well known cognitive and affective taxonomies, the Kolb figure
illustrates a range of interrelated learning activities and styles beneficial
to novices and experts. Designed to emphasize reflection on learners’
experiences, and progressive conceptualization and active experimentation,
this kind of environment is congruent with the aim of lifelong learning. Randy
Garrison points out that:
From a content
perspective, the key is not to inundate students with information. The first
responsibility of the teacher or content expert is to identify the central
idea and have students reflect upon and share their conceptions. Students
need to be hooked on a big idea if learners are to be motivated to be
reflective and self-directed in constructing meaning. Inundating learners
with information is discouraging and is not consistent with higher order
learning . . . Inappropriate assessment and excessive information will
seriously undermine reflection and the effectiveness of asynchronous
learning.
Reflection on a big
question is amplified when it enters collaborative inquiry, as multiple styles
and approaches interact to respond to the challenge and create solutions. In
How People Learn: Brain, Mind, Experience, and School, John Bransford and
colleagues describe a legacy cycle for collaborative inquiry, depicted in a
figure by Vanderbilt University researchers (see image, lower left).
Continued in the article
December 12, 2003 message from Tracey Sutherland [return@aaahq.org]
THE EDUCATIONAL COMPETENCY ASSESSMENT (ECA) WEB
SITE IS LIVE! http://www.aicpa-eca.org
The AICPA provides this resource to help educators
integrate the skills-based competencies needed by entry-level accounting
professionals. These competencies, defined within the AICPA Core Competency
Framework Project, have been derived from academic and professional competency
models and have been widely endorsed within the academic community. Created by
educators for educators, the evaluation and educational strategies resources
on this site are offered for your use and adaptation.
The ECA site contains a LIBRARY that, in addition to
the Core Competency Database and Education Strategies, provides information
and guidance on Evaluating Competency Coverage and Assessing Student
Performance.
To assist you as you assess student performance and
evaluate competency coverage in your courses and programs, the ECA ORGANIZERS
guide you through the process of gathering, compiling and analyzing evidence
and data so that you may document your activities and progress in addressing
the AICPA Core Competencies.
The ECA site can be accessed through the Educator's
page of aicpa.org, or at the URL listed above.
The Downfall of Lecturing
Bob Jensen's threads on metacognitive
learning ---
http://faculty.trinity.edu/rjensen/265wp.htm
Micro Lectures and Student-Centered Learning
---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#MicroLectures
Center for Research on Learning and Teaching ---
http://www.engin.umich.edu/teaching/crltengin/researchscholarship/index.html
Great Lectures May Be Learning Losers
"Appearances can be deceiving: instructor fluency increases perceptions of
learning without increasing actual learning," by Shana K. Carpenter, Miko M.
Wilford, Nate Kornell, Kellie M. Mullaney, Springer.com, May 2013 ---
http://link.springer.com/article/10.3758%2Fs13423-013-0442-z
Abstract
The present study explored the effects of lecture fluency on students’
metacognitive awareness and regulation. Participants watched one of two
short videos of an instructor explaining a scientific concept. In the
fluent video, the instructor stood upright,
maintained eye contact, and spoke fluidly without notes. In the
disfluent video, the instructor slumped, looked
away, and spoke haltingly with notes. After watching the video, participants
in Experiment 1
were asked to predict how much of the content they would later be able to
recall, and participants in Experiment
2 were given a
text-based script of the video to study. Perceived learning was
significantly higher for the fluent instructor than for the disfluent
instructor (Experiment 1),
although study time was not significantly affected by lecture fluency
(Experiment 2). In
both experiments, the fluent instructor was rated significantly higher than
the disfluent instructor on traditional instructor evaluation questions,
such as preparedness and effectiveness. However, in both experiments,
lecture fluency did not significantly affect the amount of information
learned. Thus, students’ perceptions of their own learning and an
instructor’s effectiveness appear to be based on lecture fluency and not on
actual learning.
Downfall of Lecturing ---
http://faculty.trinity.edu/rjensen/Assess.htm#DownfallOfLecturing
Two Ongoing Papers by Bob
Jensen
Socratic Method Thread on the AECM
September 25, 2010 message from super accounting teacher Joe Hoyle
-----Original Message-----
From: Hoyle, Joe [mailto:jhoyle@richmond.edu]
Sent: Saturday, September 25, 2010 8:42 AM
To: Jensen, Robert
Subject: RE: Question for Joe Hoyle: The Quickest 2011 CPA Exam Breakdown
You'll Ever Read
Hi Bob,
Hope this finds you well. I just got through giving
a bunch of tests last week (Intermediate Accounting II and Introduction to
Financial Accounting) and, as always, some learned it all and some learned a
lot less. I am using my own new Financial Accounting textbook this semester.
As you may know, the book is written in an entirely Socratic Method
(question and answer style). I find that approach stimulates student
curiosity much better than the traditional textbook which uses what I call a
sermon or monologue style. The Socratic Method has been around for 2,500
years -- isn't it strange that it has been ignored as a possible textbook
model? I'm not a big fan of college education presently (that is college
education and not just accounting education). There are three major
components to education: professors, students, and the textbook (or other
course material). It is hard to change the professors and the students. I
think if we want to create a true evolution in college education in a hurry
(my goal), the way to do that is produce truly better college textbooks. I
wish more college accounting professors would think seriously about how
textbooks could be improved. At the AAA meeting in San Francisco in August,
I compared a 1925 intermediate accounting textbook to a 2010 intermediate
accounting textbook and there was a lot less difference than you might have
expected. Textbooks have simply failed to evolve very much (okay, they are
now in color). It is my belief that textbooks were created under a
"conveyance of information" model. An educated person writes a textbook to
convey tons of information to an uneducated person. In the age of Google,
Yahoo, Facebook, YouTube, and Wikipedia, I think the need to convey
information is no longer so urgent. I think we need to switch to a "thinking
about information" model. And, if that is the goal, the Socratic Method is
perfect. You can start off with a question like "Company X reports inventory
at $500,000. What does that mean? Is it the cost or is the retail value?
And, if it is one, why is not the other?" Accounting offers thousands of
such delightful questions.
But, I digress -- you asked about CPAreviewforFREE.
We just finished our 117th week and it has been so much fun. We had a person
write in this week (on our Facebook page) to tell us that she had made three
99s and an 89. Over the summer, we averaged about 300,000 page views per
week. That is page views and not hits but that is still a lot of people
answering a lot of questions.
We are currently writing new questions for the new
exam starting in 2011 including task-based simulations, IFRS, and written
communications questions for BEC. I personally think the exam will change
less than people expect. Currently, roughly 50 percent of the people who
take a part pass that part. I would expect that in January under the new CPA
exam format, roughly 50 percent of the people who take a part will pass that
part. And, I would guess it will be almost exactly the same 50 percent.
However, to be honest with you, we are in the
process of adding a subscription service. I don't know if you ever go to
ESPN.com but they give a lot of free information (Red Sox beat the Yankees
last night 10-8) but they also have a subscription service where you can
learn about things in more depth for a monthly fee (almost like a
newspaper). Our 2,100 free questions and answers will ALWAYS stay free. But
we found that people really wanted to have some content. If they missed a
question on earnings per share, for example, they wanted to know more about
how convertible bonds are handled in that computation. They didn't feel the
need to pay $2,500 (don't get me started on what I think about that) but
they wanted a bit more information.
To date, we have subscription content for FAR and
Regulation. Each is available for $15 per month which I think is a
reasonable price (especially in a recession). (As an aside, I have long felt
that the high cost of CPA review programs keeps poor people out of the
profession which I think is extremely unfair and even un-American.) In our
FAR content, for example, we have 621 slides that cover everything I could
think of that FAR will probably ask about. There are probably more slides in
Regulation but I haven't counted them yet. BEC and Auditing will be ready as
quickly as possible. When you have no paid employees, things only get done
as fast as you can get them done.
Bob, I was delighted to see your name on my email
this morning. I'm actually in Virginia Beach on a 2 day vacation but decided
I'd rather write you than go walk on the beach :). If I can ever address
more questions about textbooks, CPAreviewforFREE, or the Red Sox and the
Yankees, please let me know. As my buddy Paul Clikeman (who is on the AECM
list) will tell you, I am a person of opinion.
Joe
September 26, 2010 message from New Zealand Accounting Practitioner Robert Bruce
Walker
-----Original Message-----
From: THE Internet Accounting List/Forum for CPAs [mailto:CPAS-L@LISTSERV.LOYOLA.EDU]
On Behalf Of Robert Bruce Walker
Sent: Sunday, September 26, 2010 5:16 AM
To: CPAS-L@LISTSERV.LOYOLA.EDU
Subject: Re: Question for Joe Hoyle: The Quickest 2011 CPA Exam Breakdown
You'll Ever Read
Interesting thesis from your friend in regard to
teaching method. I must admit, though I attempt to use the Socratic method,
I am suspicious of it. I use it when teaching my staff. I lay out the double
entry and leave the conceptual points empty and invite my employee to
complete the entries. What happens is that I continue to ask the questions
giving more and more away until I lose my temper and complete the exercise
and say: 'There you are. Why can't you do that?!?' This may merely tell you
that I am too impatient, probably true.
The problem with the Socratic method is that it is
based on a proposition related to the nature of knowledge. Plato, or perhaps
Socrates, held the view that true knowledge was innate. It is there from the
moment of birth or earlier and the Socratic method is applied to reveal or
assist to reveal that which lies within the knowledgeable but ill-formed
brain. But then it is concerned only with the knowledge that is true
knowledge, which essentially reduces to a knowledge of mathematics or a
priori deductive 'truth' - we all have, for instance, Pythagoras' theorem in
our heads. Other 'things' are not knowledge. That is material that is
derived from sense experience. Whilst I only have a cursory knowledge of his
work, I think that Chomsky essentially adopts a view that language lies
innate in the human baby for otherwise they could not acquire a facility
with language as rapidly as they do.
I do recall many years ago studying a Platonic
dialogue, I can't even remember its name, in which Socrates attempts to
demonstrate how a geometrical problem can be solved by a slave boy simply from
Socrates' questioning. The slave boy doesn't get it. Socrates is reduced to
drawing a picture in the sand. I was taught that this necessity is the
implied concession from Plato that the Socratic method doesn't actually
work.
Does the discipline that is accounting lie latent
and ill-formed in our brains? That might depend on what accounting actually
is. Possibly, as I am essentially innumerate, accounting is the only
mathematical thing I have ever truly understood. Once I saw the essence of
it - I can remember where and when this happened (Putney public library,
London) and from that moment I held my sense of accounting as if a religious
truth. That sense of religiosity has driven everything I have done ever
since. In other words I have a sense of wonder. But then I know that
accounting is as much about words as numbers. It is where the words meet the
numbers, where the numerical ideal of accounting meets the reality of
economic events, that the accountant must stand.
Here is a thought from TS Eliot, the quintessential
Trans-Atlantic soul, in his poem The Hollow Men:
Between the idea
And the reality
Between the motion
And the act
Falls the Shadow
September 26, 2010 reply from Bob Jensen
-----Original Message-----
From: Jensen, Robert
Sent: Sunday, September 26, 2010 5:16 AM
To: CPAS-L@LISTSERV.LOYOLA.EDU
Subject: RE: Question for Joe Hoyle: The Quickest 2011 CPA Exam Breakdown
You'll Ever Read
Hi Robert and Todd,
Socratic Method is the preferred pedagogy in law
schools ---
http://en.wikipedia.org/wiki/Socratic_Method
Psychologists indeed study memory and learning with
particular focus on metacognition ---
http://faculty.trinity.edu/rjensen/265wp.htm
Socratic Method has various metacognitive benefits.
I don't think we should take Socrates/Plato too
literally about latent knowledge. There is some evidence of latent knowledge
such as when three year olds, notably savants, can play music or perform
math tasks they've never been taught.
But accountants and chemists have to be taught. The
question is by what pedagogy? The Socratic Method is more closely aligned
with "learning on your own," using Socratic questions to guide students'
learning and reasoning on their own. It engages critical thinking and
reasoning. It is not, however, as efficient as most other pedagogies when a
lot of material must be covered in a short period of time. For example, I
don't particularly recommend the Socratic Method in an audience of 200 CPAs
seeking to quickly pick up tips about updates to the tax code or IFRS in a
six-hour CPE session.
Even though Joe Hoyle attempts Socratic Method by
giving students problems that they must then solve on their own, Joe does
use a textbook that guides their learning asynchronously. He also lectures.
A better example of "learning on your own" is the
BAM pedagogy in intermediate accounting which has demonstrated superiority
for long-term memory in spite of only having one lecture a year and no
textbook. This is closer to the adage that experience is the best teacher.
But "learning on your own" is a painful and slimy-sweat pedagogy when
administered at its best.
Professors Catenach, Croll, and Grinacker received
an AAA Innovation in Accounting Education Award for introducing the BAM
pedagogy in two semesters of Intermediate Accounting at the University of
Virginia. Among other things was a significant increase in performance on
the CPA examination in a program that, under the BAM pedagogy, had no
assigned textbook and taught even less to the CPA examination than before
instigating the BAM pedagogy.
http://faculty.trinity.edu/rjensen/265wp.htm
The undisputed advantage of the BAM pedagogy is
better long-term memory.
BAM is closest to an ideal when combined with
competency-based assessment, although such assessment might be carried too
far in terms of limiting critical thinking learning (if students tend to
rote memorize for their competency-based final examinations) ---
http://faculty.trinity.edu/rjensen/assess.htm#ECA
The BAM pedagogy is probably Socratic Method at
nearly its best in terms of learning. There can be a price to be paid in the
sense that it is more time consuming for students (probably far too much for
a single course taken) and tends to burn out instructors and students. If a
student had to take five simultaneous courses all using the BAM pedagogy,
the top students would probably drop out of college from lack of sleep and
health deterioration.
My threads on alternate pedagogies, including
Mastery Learning, are at ---
http://faculty.trinity.edu/rjensen/assess.htm#Teaching
Mastery Learning, like the BAM pedagogy, burns out students and instructors.
By the way, it is not so
simple to test “learning” because the term “learning” is very ambiguous. We
most easily test learning of facts, such as in a geography test on state
capitals or a spelling bee. We can test problem-solving ability, such as in a
mathematics test. However, since students vary so much at the beginning of a
math course, it is difficult to measure the incremental benefit of the
course without also measuring problem-solving ability at the start of the
course.
Bob Jensen
September 26, 2010 reply from
Joe Hoyle,
Bob,
I can’t speak for Socrates or Plato about the innate nature of knowledge but I
do think students can be led to figure things out on their own by the use of
carefully sequenced questions. And, isn’t that what we want: for them to
figure things out on their own now so they can figure things out on their own
after they leave our class?
Virtually all of us have been
taught by a standard lecture style so it is difficult to even conceive of
something different. Let me give you an example of a question and answer
class.
After about three weeks of the
semester, I started my sophomore class recently with the following series of
questions. As it happened, there was no reading here. The students pretty
much (but not entirely) started out as blank slates which I think Socrates would
have preferred. I’ll give the questions here; you can figure out how the
students would have answered. I do try to move through these questions at
lightning speed—I want students on the edge of their seats.
--My company owns a few
thousand shares of Ford Motor Company. These shares cost $40,000 but had a fair
value of $65,000. On a set of financial statements, where is this investment
reported?
--Why is it shown as an asset?
--What do I mean by cost?
--What do I mean by fair value?
--Do you think US GAAP allows
my company to make the choice of whether to use cost or fair value for reporting
purposes?
--Okay if US GAAP only allows
one method of reporting, let’s take a class vote on whether FASB would have
picked cost or fair value. (Note – the vote was roughly 50-50.)
--(To a student): You picked
cost – what would be the advantages of reporting cost?
--(To a different student):
You picked fair value – what would be the advantage of reporting fair value?
--Is one method totally right
and one method totally wrong? Is that what we are trying to determine -- right
versus wrong?
--Why did the company make this
investment?
--When will they want to sell
this investment?
--Are they able to sell the
investment immediately if they so choose?
--Can they get roughly $65,000
immediately if they decide to sell?
--US GAAP requires this
investment to be reported at fair value. What does that tell us?
--My company owns two acres of
land that it bought to use for a parking lot at some point in the future. The
land cost $40,000 but had a fair value of $65,000. On a set of financial
statements, where is this investment reported?
--Okay, this is another
asset. Do you think US GAAP allows my company to make the choice of whether to
use cost or fair value for reporting purposes?
--If the land is like the
investment, how will it be reported?
--When will my company choose
to sell this land?
--Will the company be able to
sell the land immediately if it so chooses?
--Can they get roughly $65,000
immediately if they decide to sell the land?
--If they didn’t buy the land
to sell, if they cannot necessarily sell the land immediately, and if there is
no market to create an immediate sale, is there sufficient reason to report the
land at its $65,000 fair value?
--So, investments are reported
at fair value whereas land is reported at cost. Does it surprise you that
these two assets are reported in different ways?
--Let’s take one more and see
if you can figure it out – your company has inventory that has a cost of $40,000
and a fair value of $65,000.
--Did you buy the inventory to
sell or to keep and use?
--Are you sure you can get the
$65,000 right now if you need the money?
--Are you sure you can make a
sale immediately?
--Inventory resembles an
investment in that it was bought in hopes of selling for a gain. However, it
also resembles land in that a sale at a certain amount is not guaranteed without
a formal market. Consequently, whether you use cost or fair value is not
obvious. US GAAP says inventory should be reported at cost (we will later
discuss lower of cost or market). What does that tell us about when we should
report an asset at fair value?
--On our first test, if I gave
you another asset that we have not yet discovered, could you determine whether
it was likely to be reported at cost or fair value?
It took us about 20 minutes to
get this far in the class and every student had to answer at least one question
orally. At the end, I felt that they all had a better understanding of the
reporting of assets. Often students have the view that all accounts report the
same information. I want them to understand that US GAAP requires different
accounts to be reported in different ways and that each way has its own logic
based on the rules of accounting. I want them to be engaged and I want them to
figure as much out for themselves as possible. At the end, I think they know
that investments in stocks are reported at fair value whereas land and inventory
are reported at cost (well, until we discuss lower of cost or market). Better
still, I think they understand why and can make use of that knowledge.
Does it always work? Oh, of
course not. But I do think it gets them thinking about accounting rather than
memorizing accounting. One day in 1991, I switched overnight from lecturing to
asking questions. Try it – you might like it.
Joe
My Hero at the American Accounting Association
Meetings in San Antonio on August 13, 2002 --- Amy Dunbar
How do students evaluate Amy Dunbar's online tax courses?
This link is a pdf doc that I will be presenting at a
CPE session with Bob Jensen, Nancy Keeshan, and Dennis Beresford at the AAA on
Tuesday. I updated the paper I wrote that summarized the summer 2001 online
course. You might be interested in the exhibits, particularly Exhibit II,
which summarizes student responses to the learning tools over the two summers.
This summer I used two new learning tools: synchronous classes (I used
Placeware) and RealPresenter videos. My read of the synchronous class comments
is that most students liked having synchronous classes, but not often and not
long ones! 8 of the 57 responding students thought the classes were a waste of
time. 19 of my students, however, didn't like the RealPresenter videos, partly
due to technology problems. Those who did like them, however, really liked
them and many wanted more of them. I think that as students get faster access
to the Internet, the videos will be more useful.
http://www.sba.uconn.edu/users/adunbar/genesis_of_an_online_course_2002.pdf
Amy Dunbar
UConn
Education is an admirable thing, but it is well to
remember from time to time that nothing that is worth learning can be taught.
Oscar Wilde
"The Objective of Education is Learning, Not Teaching (audio version
available)," University of Pennsylvania's Knowledge@Wharton, August 20, 2008
---
http://knowledge.wharton.upenn.edu/article.cfm;jsessionid=9a30b5674a8d333e4d18?articleid=2032
In their book, Turning Learning Right Side
Up: Putting Education Back on Track, authors Russell L. Ackoff and
Daniel Greenberg point out that today's education system is seriously flawed
-- it focuses on teaching rather than learning. "Why should children -- or
adults -- be asked to do something computers and related equipment can do
much better than they can?" the authors ask in the following excerpt from
the book. "Why doesn't education focus on what humans can do better than the
machines and instruments they create?"
"Education is an admirable thing, but it is well to
remember from time to time that nothing that is worth learning can be
taught."
-- Oscar Wilde
Traditional education focuses on teaching, not
learning. It incorrectly assumes that for every ounce of teaching there is
an ounce of learning by those who are taught. However, most of what we learn
before, during, and after attending schools is learned without its being
taught to us. A child learns such fundamental things as how to walk, talk,
eat, dress, and so on without being taught these things. Adults learn most
of what they use at work or at leisure while at work or leisure. Most of
what is taught in classroom settings is forgotten, and much of what is
remembered is irrelevant.
In most schools, memorization is mistaken for
learning. Most of what is remembered is remembered only for a short time,
but then is quickly forgotten. (How many remember how to take a square root
or ever have a need to?) Furthermore, even young children are aware of the
fact that most of what is expected of them in school can better be done by
computers, recording machines, cameras, and so on. They are treated as poor
surrogates for such machines and instruments. Why should children -- or
adults, for that matter -- be asked to do something computers and related
equipment can do much better than they can? Why doesn't education focus on
what humans can do better than the machines and instruments they create?
When those who have taught others are asked who in
the classes learned most, virtually all of them say, "The teacher." It is
apparent to those who have taught that teaching is a better way to learn
than being taught. Teaching enables the teacher to discover what one thinks
about the subject being taught. Schools are upside down: Students should be
teaching and faculty learning.
After lecturing to undergraduates at a major
university, I was accosted by a student who had attended the lecture. After
some complimentary remarks, he asked, "How long ago did you teach your first
class?"
I responded, "In September of 1941."
"Wow!" The student said. "You mean to say you have
been teaching for more than 60 years?"
"Yes."
"When did you last teach a course in a subject that
existed when you were a student?"
This difficult question required some thought.
After a pause, I said, "September of 1951."
"Wow! You mean to say that everything you have
taught in more than 50 years was not taught to you; you had to
learn on your own?"
"Right."
"You must be a pretty good learner."
I modestly agreed.
The student then said, "What a shame you're not
that good a teacher."
The student had it right; what most faculty members
are good at, if anything, is learning rather than teaching. Recall that in
the one-room schoolhouse, students taught students. The teacher served as a
guide and a resource but not as one who force-fed content into students'
minds.
Ways of Learning
There are many different ways of learning; teaching
is only one of them. We learn a great deal on our own, in independent study
or play. We learn a great deal interacting with others informally -- sharing
what we are learning with others and vice versa. We learn a great deal by
doing, through trial and error. Long before there were schools as we know
them, there was apprenticeship -- learning how to do something by trying it
under the guidance of one who knows how. For example, one can learn more
architecture by having to design and build one's own house than by taking
any number of courses on the subject. When physicians are asked whether they
learned more in classes or during their internship, without exception they
answer, "Internship."
In the educational process, students should be
offered a wide variety of ways to learn, among which they could choose or
with which they could experiment. They do not have to learn different things
the same way. They should learn at a very early stage of "schooling" that
learning how to learn is largely their responsibility -- with the help they
seek but that is not imposed on them.
The objective of education is learning, not
teaching.
There are two ways that teaching is a powerful tool
of learning. Let's abandon for the moment the loaded word teaching, which is
unfortunately all too closely linked to the notion of "talking at" or
"lecturing," and use instead the rather awkward phrase explaining something
to someone else who wants to find out about it. One aspect of explaining
something is getting yourself up to snuff on whatever it is that you are
trying to explain. I can't very well explain to you how Newton accounted for
planetary motion if I haven't boned up on my Newtonian mechanics first. This
is a problem we all face all the time, when we are expected to explain
something. (Wife asks, "How do we get to Valley Forge from home?" And
husband, who does not want to admit he has no idea at all, excuses himself
to go to the bathroom; he quickly Googles Mapquest to find out.) This is one
sense in which the one who explains learns the most, because the person to
whom the explanation is made can afford to forget the explanation promptly
in most cases; but the explainers will find it sticking in their minds a lot
longer, because they struggled to gain an understanding in the first place
in a form clear enough to explain.
The second aspect of explaining something that
leaves the explainer more enriched, and with a much deeper understanding of
the subject, is this: To satisfy the person being addressed, to the point
where that person can nod his head and say, "Ah, yes, now I understand!"
explainers must not only get the matter to fit comfortably into their own
worldview, into their own personal frame of reference for understanding the
world around them, they also have to figure out how to link their frame of
reference to the worldview of the person receiving the explanation, so that
the explanation can make sense to that person, too. This involves an intense
effort on the part of the explainer to get into the other person's mind, so
to speak, and that exercise is at the heart of learning in general. For, by
practicing repeatedly how to create links between my mind and another's, I
am reaching the very core of the art of learning from the ambient culture.
Without that skill, I can only learn from direct experience; with that
skill, I can learn from the experience of the whole world. Thus, whenever I
struggle to explain something to someone else, and succeed in doing so, I am
advancing my ability to learn from others, too.
Learning through Explanation
This aspect of learning through explanation has
been overlooked by most commentators. And that is a shame, because both
aspects of learning are what makes the age mixing that takes place in the
world at large such a valuable educational tool. Younger kids are always
seeking answers from older kids -- sometimes just slightly older kids (the
seven-year old tapping the presumed life wisdom of the
so-much-more-experienced nine year old), often much older kids. The older
kids love it, and their abilities are exercised mightily in these
interactions. They have to figure out what it is that they understand about
the question being raised, and they have to figure out how to make their
understanding comprehensible to the younger kids. The same process occurs
over and over again in the world at large; this is why it is so important to
keep communities multi-aged, and why it is so destructive to learning, and
to the development of culture in general, to segregate certain ages
(children, old people) from others.
Continued in article
Bob Jensen's threads on assessment, learning, and technology in education
are at
http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
Question
What types of students benefit most versus least from video lectures?
"Video Lectures May Slightly Hurt Student Performance," by Sophia Li,
Inside Higher Ed, Chronicle of Higher Education, June 21, 2010 ---
http://chronicle.com/blogPost/Video-Lectures-May-Slightly/24963/
No clear winner emerges in the contest between
video and live instruction, according to the
findings of a recent study led by David N.
Figlio, a professor of education and social policy at Northwestern
University. The study found that students who watched lectures online
instead of attending in-person classes performed slightly worse in the
course over all.
A previous
analysis by the U.S. Department of Education that
examined existing research comparing online and live instruction favored
online learning over purely in-person instruction, according to
the working paper
by Mr. Figlio and his colleagues, which was released
this month by the National Bureau of Economic Research.
But Mr. Figlio's study contradicted those results,
showing that live instruction benefits Hispanic students, male students, and
lower-achieving students in particular.
Colleges and universities that are turning to video
lectures because of their institutions' tight budgets may be doing those
students a disservice, said Mark Rush, a professor of economics at the
University of Florida and one of the working paper's authors.
More research will be necessary, however, before
any definite conclusions can be drawn about the effectiveness of video
lectures, said Lu Yin, a graduate student at the University of Florida who
worked on the project. Future research could study the effectiveness of
watching lectures online for topics other than microeconomics, which was the
subject of the course evaluated in the study, Ms. Yin said.
Jensen Comment
Studies like this just do not extrapolate well into the real world, because so
very, very much depends upon both how instructors use videos and how students
use videos. My students had to take my live classes, but my Camtasia video
allowed them to keep going over and over, at their own learning pace, technical
modules (PQQ Possible Quiz Questions) until they got technical things down pat
---
http://www.cs.trinity.edu/~rjensen/video/acct5342/
Students who did not use the videos as intended usually paid a price.
However, some outcomes in the above study conform to my priors. For example,
Brigham Young University (BYU) has very successfully replaced live lectures with
variable-speed video lectures in the first two basic accounting courses ---
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#BYUvideo
However, BYU most likely has mostly high-achieving students to begin with, especially in accounting. It would be interesting to formally study the use of such variable-speed video in colleges having a higher proportion of lower-achieving students. My guess is that the variable-speed video lectures would be less effective with lower-achieving students who are not motivated to keep replaying videos until they get the technical material down pat. They may be lower achieving in great measure because they are less motivated learners or learners who have too many distractions (like supporting children) to have as much quality study time.
And live lecturing/mentoring is hard to put in a single category because
there are so many types of live lecturing/mentoring ---
http://faculty.trinity.edu/rjensen/assess.htm#Teaching
In conclusion, I think much depends upon the quality of the video versus
lecture, class size, and student motivation. Videos offer the tremendous
advantage of instant replay and being able to adjust to the best learning pace
of the student. Live lectures can, and often do, lead to more human interactive
factors that can be good (if they motivate) and bad (if they distract or instill
dysfunctional fear).
The best video lectures are probably those accompanied by instant messaging with an instructor or tutor who can provide answers, or clues to answers, not found on the video.
"More Faculty Members Adopt 'Student Centered' Teaching," Chronicle
of Higher Education, October 18, 2009 ---
http://chronicle.com/article/Chart-More-Faculty-Members/48848/
Professors are
warming to new methods of teaching and testing
that experts say are more likely to engage
students, a UCLA survey found last year. Below
are percentages of faculty members who said they
used these approaches in all or most of the
courses they taught. Those trends may continue,
UCLA says, as full professors retire. Assistant
professors were much more likely, for example,
to structure teaching around small groups of
students, while full professors were more likely
to lecture extensively.
Selected teaching methods                               2005    2008
Cooperative learning (small groups of students)          48%     59%
Using real-life problems*                                n/a     56%
Group projects                                           33%     36%
Multiple drafts of written work                          25%     25%
Student evaluations of one another's work                16%     24%
Reflective writing/journaling                            18%     22%
Electronic quizzes with immediate feedback in class*     n/a      7%
Extensive lecturing (not student-centered)               55%     46%

Selected examination methods                            2005    2008
Short-answer exams                                       37%     46%
Term and research papers                                 35%     44%
Multiple-choice exams                                    32%     33%
Grading on a curve                                       19%     17%

* Not asked in the 2005 survey

Note: The figures are based on survey responses of 22,562 faculty members at 372 four-year colleges and universities nationwide. The survey was conducted in the fall and winter of 2007-8 and covered full-time faculty members who spent at least part of their time teaching undergraduates. The figures were statistically adjusted to represent the total population of full-time faculty members at four-year institutions. Percentages are rounded.

Source: "The American College Teacher: National Norms for the 2007-8 HERI Faculty Survey," University of California at Los Angeles Higher Education Research Institute
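The 2005-to-2008 shifts in the survey can be tallied as percentage-point changes. This is a small sketch using only the figures reported above; methods first asked in 2008 are omitted because they have no baseline.

```python
# Percentage-point changes, 2005 -> 2008, for the HERI survey figures
# quoted above (methods not asked in 2005 are omitted).
survey = {
    "Cooperative learning":               (48, 59),
    "Group projects":                     (33, 36),
    "Multiple drafts of written work":    (25, 25),
    "Student evaluations of peers' work": (16, 24),
    "Reflective writing/journaling":      (18, 22),
    "Extensive lecturing":                (55, 46),
    "Short-answer exams":                 (37, 46),
    "Term and research papers":           (35, 44),
    "Multiple-choice exams":              (32, 33),
    "Grading on a curve":                 (19, 17),
}

for method, (y2005, y2008) in survey.items():
    change = y2008 - y2005
    print(f"{method}: {change:+d} points")
```

Every student-centered method rose or held steady, while extensive lecturing fell 9 points, which is the trend the article describes.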
Bob Jensen's threads on metacognitive learning ---
http://faculty.trinity.edu/rjensen/265wp.htm
Bob Jensen's threads on higher education are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
"Web Surfing in the Classroom: Sound Familiar?" by Catherine Rampell,
Chronicle of Higher Education, May 15, 2008 ---
http://chronicle.com/wiredcampus/index.php?id=3004&utm_source=wc&utm_medium=en
Over at the New York Times’s
Freakonomics blog, Yale Law School professor Ian
Ayres praises the University of Chicago Law School’s decision to
eliminate Internet access in some classrooms. But
more importantly, he recounts an amusing sketch from Yale's "Law Revue"
skit night, which is worth sharing in full:
One of the skits had a group of students sitting at
desks, facing the audience, listening to a professor drone on.
All of the students were looking at laptops except
for one, who had a deck of cards and was playing solitaire. The professor
was outraged and demanded that the student explain why she was playing
cards. When she answered “My laptop is broken,” I remember there was
simultaneously a roar of laughter from the student body and a gasp from the
professors around me. In this one moment, we learned that something new was
happening in class.
Bob Jensen's threads on higher education controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
Random Thoughts (about learning from a retired professor of
engineering) ---
http://www4.ncsu.edu/unity/lockers/users/f/felder/public/Columns.html
Dr. Felder's column in Chemical Engineering Education
Focus is heavily upon active learning and group learning.
Bob Jensen's threads on learning are in the following links:
http://faculty.trinity.edu/rjensen/assess.htm
http://faculty.trinity.edu/rjensen/255wp.htm
http://faculty.trinity.edu/rjensen/265wp.htm
March 3, 2005 message from Carolyn Kotlas [kotlas@email.unc.edu]
WHAT LEADS TO ACHIEVING SUCCESS IN DISTANCE
EDUCATION?
"Achieving Success in Internet-Supported
Learning in Higher Education," released February 1, 2005, reports on the
study of distance education conducted by the Alliance for Higher Education
Competitiveness (A-HEC). A-HEC surveyed 21 colleges and universities to
"uncover best practices in achieving success with the use of the Internet
in higher education." Some of the questions asked by the study included:
"Why do institutions move online? Are there
particular conditions under which e-Learning will be successful?"
"What is the role of leadership and by whom?
What level of investment or commitment is necessary for success?"
"How do institutions evaluate and measure
success?"
"What are the most important and successful
factors for student support and faculty support?"
"Where do institutions get stuck? What are the
key challenges?"
The complete report is available online, at no cost,
at http://www.a-hec.org/e-learning_study.html.
The "core focus" of the nonprofit Alliance
for Higher Education Competitiveness (A-HEC) "is on communicating how
higher education leaders are creating positive change by crystallizing their
mission, offering more effective academic programs, defining their role in
society, and putting in place balanced accountability measures." For more
information, go to http://www.a-hec.org/ .
Individual membership in A-HEC is free.
Hi Yvonne,
For what it is worth, my advice to new
faculty is at http://faculty.trinity.edu/rjensen/000aaa/newfaculty.htm
One thing to remember is that the
employers of our students (especially the public accounting firms) are very
unhappy with our lecture/drill pedagogy at the introductory and intermediate
levels. They believe that such pedagogy turns away top students, especially
creative and conceptualizing students. Employers believe that
lecture/drill pedagogy attracts savant-like memorizers who can recite their lessons chapter and verse but have few creative talents and poor prospects for
becoming leaders. The large accounting firms believed this so strongly that they
donated several million dollars to the American Accounting Association for the
purpose of motivating new pedagogy experimentation. This led to the Accounting Education Change Commission (AECC) and the mixed-outcome experiments that followed. See http://accounting.rutgers.edu/raw/aaa/facdev/aecc.htm
The easiest pedagogy for faculty is
lecturing, and it is appealing to busy faculty who do not have time for students
outside the classroom. When lecturing to large classes it is even easier because
you don't have to get to know the students and have a great excuse for using
multiple choice examinations and graduate student teaching assistants. I always
remember an economics professor at Michigan State University who said that when
teaching basic economics it did not matter whether he had a live class of 300
students or a televised class of 3,000 students. His full-time teaching load was
three hours per week in front of a TV camera. He was a very good lecturer and
truly loved his three-hour per week job!
Lecturing appeals to faculty because it
often leads to the highest teaching evaluations. Students love faculty who
spoon feed and make learning seem easy. It's much easier when mom or dad
spoon the pudding out of the jar than when you have to hold your own spoon
and/or find your own jar.
An opposite but very effective pedagogy
is the AECC (University of Virginia) BAM Pedagogy that entails live classrooms
with no lectures. BAM instructors think it is more important for students to
learn on their own instead of sitting through spoon-fed learning lectures. I
think it takes a special kind of teacher to pull off the astoundingly successful
BAM pedagogy. Interestingly, it is often some of our best lecturers who decided
to stop lecturing because they experimented with the BAM and found it to be far
more effective for long-term memory. The top BAM enthusiasts are Tony Catanach
at Villanova University and David Croll at the University of Virginia. Note,
however, that most BAM applications have been at the intermediate accounting level. I suspect (and I think BAM instructors will agree) that BAM would probably fail at the introductory level. You can read about the BAM pedagogy at http://faculty.trinity.edu/rjensen/265wp.htm
At the introductory level we have what
I like to call the Pincus (User Approach) Pedagogy. Karen Pincus is now at the
University of Arkansas, but at the time that her first learning experiments were
conducted, she taught basic accounting at the University of Southern California.
The Pincus Pedagogy is a little like both the BAM and the case method
pedagogies. However, instead of having prepared learning cases, the Pincus
Pedagogy sends students to on-site field visitations where they observe on-site
operations and are then assigned tasks to creatively suggest ways of improving
existing accounting, internal control, and information systems. Like the BAM,
the Pincus Pedagogy avoids lecturing and classroom drill. Therein lies the
controversy. Students and faculty in subsequent courses often complain that the
Pincus Pedagogy students do not know the fundamental prerequisites of basic
accounting needed for intermediate and advanced-level accounting courses.
Two possible links of interest on the controversial Pincus Pedagogy are as
follows:
Where the Pincus Pedagogy and the BAM
Pedagogy differ lies in subject matter itself and stress on creativity. The BAM
focuses on traditional subject matter that is found in such textbooks as
intermediate accounting textbooks. The BAM Pedagogy simply requires that
students learn any way they want to learn on their own since students remember
best what they learned by themselves. The Pincus Pedagogy does not focus on much
of the debit and credit "rules" found in most traditional textbooks.
Students are required to be more creative at the expense of memorizing the
"rules."
The Pincus Pedagogy is motivated by the
belief that traditional lecturing/drill pedagogy at the basic accounting and tax
levels discourages the best and most creative students from pursuing careers in the
accountancy profession. The BAM pedagogy is motivated more by the belief that
lecturing is a poor pedagogy for long-term memory of technical details. What is
interesting is that the leading proponents of getting away from the
lecture/drill pedagogy (i.e., Karen Pincus and Anthony Catanach) were previously
two of the very best lecturers in accountancy. If you have ever heard either of
them lecture, I think you would agree that you wish all your lecturers had been
only half as good. I am certain that both of these exceptional teachers would
agree that lecturing is easier than any other alternatives. However, they do not
feel that lecturing is the best alternative for top students.
Between lecturing and the BAM Pedagogy,
we have case method teaching. Case method teaching is a little like lecturing
and a little like the BAM with some instructors providing answers in case wrap
ups versus some instructors forcing students to provide all the answers. Master
case teachers at Harvard University seldom provide answers even in case wrap
ups, and often the cases do not have any known answer-book-type solutions. The
best Harvard cases have alternative solutions with success being based upon
discovering and defending an alternative solution. Students sometimes
interactively discover solutions that the case writers never envisioned. I
generally find case teaching difficult at the undergraduate level if students do
not yet have the tools and maturity to contribute to case discussions.
Interestingly, it may be somewhat easier to use the BAM at the undergraduate
level than Harvard-type cases. The reason is that BAM instructors are often
dealing with more rule-based subject matter such as intermediate accounting or
tax rather than conceptual subject matter such as strategic decision making,
business valuation, and financial risk analysis.
The hardest pedagogy today is probably a Socratic pedagogy online with instant messaging communications, where an instructor is on call about 60 hours per week from his or her home. The
online instructor monitors the chats and team communications between students in
the course at most any time of day or night. Amy Dunbar can tell you about this
tedious pedagogy since she's using it for tax courses and will be providing a
workshop that tells about how to do it and how not to do it. The next scheduled
workshop precedes the AAA Annual Meetings on August 1, 2003 in Hawaii. You can
also hear Dr. Dunbar and view her PowerPoint show from a previous workshop at http://www.cs.trinity.edu/~rjensen/002cpe/02start.htm#2002
In conclusion, always remember that
there is no optimal pedagogy in all circumstances. All learning is
circumstantial based upon such key ingredients as student maturity, student
motivation, instructor talent, instructor dedication, instructor time, library
resources, technology resources, and many other factors that come to bear at
each moment in time. And do keep in mind that how you teach may determine what
students you keep as majors and what you turn away.
I tend to agree with the accountancy
firms that contend that traditional lecturing probably turns away many of the
top students who might otherwise major in accountancy.
At the same time, I tend to agree with
students who contend that they took accounting courses to learn accounting
rather than economics, computer engineering, and behavioral science.
Bob Jensen
-----Original
Message-----
From: Lou&Bonnie [mailto:gyp1@EARTHLINK.NET]
Sent: Thursday, January 16, 2003 5:03 PM
I am a beginning
accounting instructor (part-time) at a local community college. I am applying
for a full-time faculty position, but am having trouble with a question: methodology in accounting--what works best for a diversified group of individuals? Some students work with accounting on a computer but have no understanding of what the information they are entering really means; other individuals have no accounting experience whatsoever. What is the best methodology to use: lecture, overheads, classroom participation? I am not sure, and I would like your feedback. Thank you in advance for your help.
Yvonne
January 20, 2003 reply from Thomas C. Omer
[omer@UIC.EDU]
Don’t forget about
Project Discovery going on at the University of Illinois at Urbana-Champaign
Thomas C. Omer
Associate Professor
Department of Accounting, University of Illinois at Chicago
The Art of Discovery: Finding the forest in spite of the trees.
Thanks for reminding me Tom. A good
link for Project Discovery is at http://accounting.rutgers.edu/raw/aaa/facdev/aeccuind.htm
January 17, 2003 reply from David R. Fordham
[fordhadr@JMU.EDU]
I'll add an
endorsement to Bob's advice to new teachers. His page should be required
reading for Ph.D.s.
And I'll add one more
tidbit.
Most educators
overlook the distinction between "lectures" and
"demonstrations".
There is probably no
need for any true "lecture" in the field of accounting at the
college level, even though it is still the dominant paradigm at most
institutions.
However, there is
still a great need for "live demonstrations", **especially** at the
introductory level.
Accounting is a
complex process. Introductory students in ANY field learn more about complex
processes from demonstrations than probably any other method.
Then, they move on
and learn more from "practicing" the process, once they've learned
the steps and concepts of the process. And for intermediate and advanced
students, practice is the best place to "discover" the nuances and
details.
While
"Discovery" is probably the best learning method of all, it is
frequently very difficult to "discover" a complex process correctly
from its beginning, on your own. Thus, a quick demonstration can often be of
immense value at the introductory level. It's an efficient way of
communicating sequences, relationships, and dynamics, all of which are present
in accounting processes.
Bottom line: You can
(and should) probably eliminate "lectures" from your classes. You
should not entirely eliminate "demonstrations" from your classes.
Unfortunately, most
education-improvement reform literature does not draw the distinction: anytime
the teacher is doing the talking in front of a class, using blackboard and
chalk or PowerPoint, they label it "lecture" and suggest you don't
do it! This is, in my view, an oversimplification, and very bad advice.
Your teaching will
change a whole lot (for the better!) once you realize that students only need
demonstrations of processes. You will eliminate a lot of material you used to
"lecture" on. This will make room for all kinds of other things that
will improve your teaching over the old "lecture" method:
discussions, Socratic dialogs, cases and dilemmas, even some entertainment
here and there.
Plus, the
"lectures" you retain will change character. Take your cue from Mr.
Wizard or Bill Nye the Science Guy, who appear to "lecture" (it's
about the only thing you can do in front of a camera!), but whose entire
program is pretty much devoted to demonstration. Good demonstrations do more
than just demonstrate, they also motivate! Most lectures don't!
Another two pennies
from the verbose one...
David R.
Fordham
PBGH Faculty Fellow
James Madison University
January 16, 2003 message from Peter French [pjfrench@CELESTIAL.COM.AU]
I found this source http://www.thomson.com/swcp/gita.html
and also Duncan Williamson has some very good basic material on his sites http://duncanwil.co.uk/index.htm
and http://www.duncanwil.co.uk/objacc.html
Don't forget the world lecture hall at http://www.utexas.edu/world/lecture/
This reminds me of how I learned ... the 'real
learning' in the workplace...
I remember my first true life consolidation - 130
companies in 1967. We filled a wall with butchers paper and had 'callers',
'writers' and 'adders' who called out the information to others who wrote out
the entries and others who did the adding. I was 25 and quite scared. The
Finance Director knew this and told me [1] to stick with 'T' accounts to be
sure I was making the right entry - just stick the ones you are sure in and
don't even think about the other entry - it must 'balance' it out; [2] just
because we are dealing with 130 companies and several hundreds of millions of
dollars don't lose sight of the fact that really it is no different from the
corner store. I have never forgotten that simple approach. He said - if the
numbers scare you, decimalise them to 100,000's in your mind - it helps ...
and it did. He often used to say the Dr/Cr entries out loud.
I entered teaching aged 48 after having been in
industry and practice for nearly 30 years. Whether I am teaching introductory
accounting, partnership formation/dissolution, consolidations, asset
revaluation, or tax-effect accounting, I simply write up the same basic entries
on the white board each session - I never use an overhead for this, I always
write it up and say it out loud, and most copy/follow me - and then recap and
get on with the lesson. I always take time out to 'flow chart' what we are
doing so that they never lose sight of the real picture ... this simple
system works, and it has never let my students down.
There have been several movements away from rote
learning in all levels of education - often with disastrous consequences. It
has its place and I am very proud to rely on it. This works and when it isn't
broken, I am not about to try to fix it.
Good luck - it is the greatest responsibility in the
world, and gives the greatest job satisfaction. It is worth every hour and
every grey hair. To realise that you have enabled someone to change their
life, made a dream come true, eclipses every successful takeover battle or
tax fight I have ever won.
Good luck - may it be to you what it has been to me.
Peter French
January 17, 2003 reply from Michael O'Neil, CPA Adjunct Prof. Weber [Marine8105@AOL.COM]
I am currently teaching high school students, some of
whom will hopefully go on to college. Parents expect you to teach the
children, which really amounts to lecturing, or going over the text material.
When you do this they do not read the textbook, nor do they know how to use
the textbook to answer homework questions. If you don't lecture then the
parents will blame you for "not" teaching their children the
material.
I agree that discovery is the best type of learning,
and the most fun. I teach geometry and accounting/consumer finance. Geometry
lends itself to discovery, but to do so you need certain materials. At our
level (high school) we are also dealing with several other issues you don't have at
the college level. In my accounting classes I teach the debit/credit, etc. and
then have them do a lot of work using two different accounting programs. When
they make errors I have them discover the error and correct it. They probably
know very little about posting, and the formatting of financial statements
although we covered it. Before we used the programs we did a lot of pencil
work.
Even when I taught accounting at the college and
junior college level I found students were reluctant to, and not well prepared
to, use their textbooks. Nor were they inclined to DO their homework.
I am sure that many of you have noticed a drop off in
quality of students in the last years. I wish I could tell you that I see that
it will change, but I do not see any effort in that direction. Education
reminds me of a hot air balloon being piloted by people who lease the balloon
and have no idea how to land it. They are just flying around enjoying the
view. If we think in terms of bankruptcy, education is ready for Chapter 11.
Mike ONeil
January 17, 2003 reply from Chuck Pier
[texcap@HOTMAIL.COM]
While not in
accounting, I would like to share some information on my wife's experience
with online education. She has a background (10 years) as a public school
teacher and decided to get her graduate degree in library science. Since I was
about to finish my doctoral studies and we knew we would be moving she wanted
to find a program that would allow her to move away and not lose too many
hours in the transfer process. What she found was the online program at the
University of North Texas (UNT) in Denton. Through this program she will be
able to complete a 36-hour American Library Association accredited Master's
degree in Library Science and only spend a total of 9 days on campus. The 9
days are split into a one-day session and two four-day sessions, which can be
combined into one five-day and one four-day session. Other than these 9 days the
entire course is conducted over the internet. The vast majority is
asynchronous, but there are some parts conducted in a synchronous manner.
She has completed
about 3/4 of the program and is currently in Denton for her last on campus
session. While I often worry about the quality of online programs, after
seeing how much work and time she is required to put in, I don't think I
should worry as much. I can honestly say that I feel she is getting a better,
more thorough education than most traditional programs. I know at a minimum
she has covered a lot more material.
All in all her
experience has been positive and this program fit her needs. I think the MLS
program at UNT has been very successful to date and appears to be growing
quite rapidly. It may serve as a role model for programs in other areas.
Chuck Pier
Charles A.
Pier
Assistant Professor Department of Accounting
Walker College of Business
Appalachian State University
Boone, NC 28608 email: pierca@appstate.edu
828-262-6189
Academic Whores: School
Systems into Lowering Standards for Achievement Tests and Graduation
Some
states are rigging achievement tests to get more money and deceive the public
Will future college graduates in President Obama's home town be able to read and
divide 37/13?
But they will be college "graduates" if community colleges lower standards like
their K-12 counterparts.
President
Obama's American Graduation Initiative
From the
Creative Commons on July 15, 2009 ---
http://creativecommons.org/weblog/entry/15818
President Obama announced yesterday
the American Graduation Initiative,
a twelve billion dollar plan to reform U.S. community
colleges. The initiative calls for five million additional community college
graduates by 2020, and plans that “increase the effectiveness and impact of
community colleges, raise graduation rates, modernize facilities, and create new
online learning opportunities” to aid this goal.
A significant component of the initiative is the plan to “create a new online
skills laboratory.” From the
fact sheet,
“Online educational software has the potential to help students learn more in
less time than they would with traditional classroom instruction alone.
Interactive software can tailor instruction to individual students like human
tutors do, while simulations and multimedia software offer experiential
learning. Online instruction can also be a powerful tool for extending learning
opportunities to rural areas or working adults who need to fit their coursework
around families and jobs. New open online courses will create new routes for
students to gain knowledge, skills and credentials. They will be developed by
teams of experts in content knowledge, pedagogy, and technology and made
available for modification, adaptation and sharing. The Departments of Defense,
Education, and Labor will work together to make the courses freely available
through one or more community colleges and the Defense Department’s distributed
learning network, explore ways to award academic credit based upon achievement
rather than class hours, and rigorously evaluate the results.”
It is important to note here the difference between “open” and simply accessible
“online”. Truly open resources for education are clearly designated as
such with a standard license that allows not only access, but the freedoms to
share, adapt, remix, or redistribute those resources. The educational materials
that make up the new open online courses for this initiative should be open in
this manner, especially since they will result from a government plan. We are
excited about this initiative and hope the license for its educational materials
will allow all of these freedoms. Catherine Casserly, formerly in charge of open
educational resources at the William and Flora Hewlett Foundation (now at the
Carnegie Foundation for the Advancement of Teaching), writes,
“Today at Macomb College, President Barack Obama announced a proposal to commit
$50 million for the development of open online courses for community colleges as
part of the American Graduation Initiative: Stronger American Skills through
Community Colleges. As proposed, the courses will be freely available for use as
is and for adaption as appropriate for targeted student populations. The
materials will carry a Creative Commons license.”
You can
read the official announcement at the White House site on their
blog and visit the briefing room for
the full fact sheet.
Jensen
Comment
Given the troublesome fact that 80% of U.S. college graduates seeking jobs could
not find jobs requiring college degrees, much more is needed than getting
more students in the U.S. to graduate from college.
July 15,
2009 reply from AMY HAAS
[haasfive@MSN.COM]
Excuse me for bringing up an often overlooked point, but getting students into
community colleges is easy. Getting them to do the college level work needed to
graduate is not! As an instructor at an urban community college for more than 16
years, I find that the typical community college student lacks study skills
and/or the motivation to succeed. They will come to class, but getting them to
actually work outside the classroom, even with tons of online resources
available, is often like "pulling teeth". They do not make the time for it.
Amy Haas
July 15
reply from Flowers, Carol
[cflowers@OCC.CCCD.EDU]
I am in agreement with Amy. This piece that Bob published implies to me that
EVERYONE should have a college education. I think that is the problem with
education. This mentality creates, once again, entitlement, not motivation.
Society has taken away the motivation that individuals once had. Why work for
it when it can be given to you! There is an old
adage................you can lead a horse to water,
but.......................................!!!
I see this as more tax dollars going to waste. I have robust epacks and online
classes, and do students take advantage of it.....some do, most "don't have the
time" -- they are attempting to carry full loads at two schools and work a full
time job. Maybe, we should be funding time management and realistic expectations
programs.
The two examples I had this Easter were doing poorly -- one was carrying two
full time jobs and a full school load; the other, two full time school loads and
1 1/2 work load . Both felt I was requiring too much and should drop my
standards because of their poor time management. I worked full time and carried
12 units (no social life).............why not more units or work, because I
wanted to be successful. If school takes longer than 4 years to complete, so be
it. I received no help. My family couldn't afford it, so I realized if I wanted
it I had to do it myself. I think many of us can tell the same story and don't
feel it diminished but enhanced our motivation.
July 15,
2009 reply from Patricia Doherty
[pdoherty@BU.EDU]
The "time" factor is another issue entirely, I think. Many of my students (at a
4-year private university) also have jobs, ranging from 10-hour work study to
full time or nearly so, to afford our astronomical tuition. That's become life.
Should there be more options for them? Yes, I think so. Many of them are very
motivated - one of my summer term students is working full time while attending
school ... and has a 4.0 GPA! Her mom is a single parent with limited means, so
she has to help because she wants to be at this school. My own adult daughter is
back in school. Her financial aid is not full tuition. She also works nearly
full time - and remains on the Dean's List. I am meantime trying to figure out
this year where my husband and I will find the money to meet the rest of the
tuition, because I don't want her to have to drop out. So I completely
understand students who are pressed for time because of work obligations. But
the ones who really want to be there find a way to use the resources available
to them to succeed. For the others, the lack of time to use what you provide is
an excuse, nothing more. They need to find a better reason for not doing well.
July 15,
2009 reply from Ed Scribner
[escribne@NMSU.EDU]
Amy et al.,
I kind of like Zucker’s article that I may have mentioned before:
http://www.ams.org/notices/199608/comm-zucker.pdf
Ed
Ed Scribner New Mexico State University Las Cruces, NM, USA
American RadioWorks: Testing Teachers (radio broadcast) ---
http://americanradioworks.publicradio.org/features/testing_teachers/
"Good and Bad Teachers: How to Tell the Difference," by Nobel Laureate
Gary Becker, Becker-Posner Blog, September 23, 2012 ---
http://www.becker-posner-blog.com/2012/09/good-and-bad-teachers-how-to-tell-the-difference-becker.html
"Rating Teachers," by Judge
Richard Posner, Becker-Posner Blog, September 23, 2012 ---
http://www.becker-posner-blog.com/2012/09/rating-teachersposner.html
"GRE and SAT validity," by Stephen Hsu, Information Processing,
June 8, 2011 ---
http://infoproc.blogspot.com/2011/06/gre-and-sat-validity.html
GPA-SAT correlations
"Psychometric thresholds for physics and mathematics," by Stephen Hsu and
James Schombert, MIT's Technology Review, May 24, 2010 ---
http://www.technologyreview.com/blog/posts.aspx?bid=354
This is a follow up to our
earlier paper on GPA-SAT correlations. Click below
for the pdf.
Non-linear Psychometric Thresholds for Physics and Mathematics
ABSTRACT
We analyze 5 years of student records at the University of Oregon to
estimate the probability of success (as defined by superior
undergraduate record; sufficient for admission to graduate school)
in Physics and Mathematics as a function of SAT-M score. We find
evidence of a non-linear threshold: below SAT-M score of roughly
600, the probability of success is very low. Interestingly, no
similar threshold exists in other majors, such as Sociology,
History, English or Biology, whether on SAT combined, SAT-R or
SAT-M. Our findings have significant implications for the demographic
makeup of graduate populations in mathematically intensive subjects,
given the current distribution of SAT-M scores.
There is clearly something different about the physics
and math GPA vs SAT distributions compared to all of the other majors we
looked at (see figure 1 in the paper). In the other majors (history,
sociology, etc.) it appears that hard work can compensate for low SAT score.
But that is not the case in math and physics.
One interesting question is whether the apparent cognitive threshold is a
linear or non-linear effect. Our data suggests that the probability of doing
well in any particular quarter of introductory physics may be linear with
SAT-M, but the probability of having a high cumulative GPA in physics
or math is very non-linear in SAT-M. See figure below: the red line is the
upper bound at 95% confidence level on the probability of getting an A in a
particular quarter of introductory physics, and the blue line is the
probability of earning a cumulative GPA of at least 3.5 or so.
Continued in article
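The per-quarter vs. cumulative distinction above can be made concrete with a toy model. The sketch below is purely illustrative (the linear mapping from SAT-M to per-quarter success probability is invented, not taken from the Hsu-Schombert paper): even if the chance of doing well in any single quarter rises smoothly and linearly with SAT-M, the chance of doing well in every one of a dozen quarters is that probability compounded, which falls off a cliff below some score.

```python
# Toy model: linear per-quarter success probability vs. highly non-linear
# cumulative success probability. All numbers here are hypothetical.

def p_quarter(sat_m, lo=400, hi=800):
    """Hypothetical per-quarter success probability, linear in SAT-M."""
    frac = (sat_m - lo) / (hi - lo)   # 0.0 at SAT-M 400, 1.0 at SAT-M 800
    return 0.1 + 0.8 * frac           # rises linearly from 0.10 to 0.90

def p_cumulative(sat_m, quarters=12):
    """Chance of succeeding in every one of `quarters` terms,
    assuming (unrealistically) independent quarters."""
    return p_quarter(sat_m) ** quarters

for score in (500, 600, 700, 800):
    print(score, round(p_quarter(score), 2), round(p_cumulative(score), 4))
```

Under these made-up numbers, per-quarter probability climbs steadily (0.3, 0.5, 0.7, 0.9), while cumulative success is essentially zero below SAT-M 600 and only becomes appreciable near the top of the range, which is the qualitative threshold shape the excerpt describes.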
Jensen Comment
Near perfection in grade averages is increasing due to grade inflation in both
high school and college ---
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
Hence I would think SAT, ACT, GRE, GMAT, LSAT, and MCAT standardized tests
would be used to further partition graduates with stellar grade averages.
Tests measure cognitive ability, but grades measure motivation as long as
grade inflation does not ruin everything in education.
About ETS Research ---
http://www.ets.org/research
More credit should be given to the efforts made by ETS to reduce cultural and
disability factors in testing.
Paying Students to Raise Test Scores ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GMAT
The New GMAT: Part 1
"The New GMAT: Questions for a Data-Rich World,: by: Alison Damast, Business
Week, May 14, 2012 ---
http://www.businessweek.com/articles/2012-05-14/the-new-gmat-questions-for-a-data-rich-world
Editor’s Note: This is the first in a three-part series on the
new GMAT, which makes its official debut on June 5. In this article, we
examine the conceptual building blocks for the test’s new Integrated
Reasoning section.
On a blustery day in February 2009, a group of nine deans and faculty
members from U.S. and European business schools huddled together in a
conference room in McLean, Va., at the Graduate Management Admission
Council’s headquarters. They were there to discuss what would be some of the
most radical changes to the Graduate Management Admission Test (GMAT) in the
exam’s nearly 60-year history.
Luis Palencia, then an associate dean at Spain’s
IESE Business School, was eager to press his case
for the skills he thought today’s MBAs needed to have at their fingertips.
Business students must be able to nimbly interpret and play with data in
graphs, spreadsheets, and charts, using the information to draw swift but
informed conclusions, he told his colleagues.
“The GMAT was not becoming obsolete, but it was
failing to identify the skills which might be important to warrant the
success of our future candidates,” he said in a phone interview from
Barcelona three years later.
By the time the faculty advisory group concluded
two days later, they had come up with a set of recommendations that would
serve as a framework for what would eventually become the new “Integrated
Reasoning” section of the
Next
Generation GMAT, which has been in beta testing
for two years and will be administered to applicants for the first time on
June 5.
Until now, the B-school entrance exam, which was
administered 258,192 times worldwide in 2011, was made up of verbal,
quantitative, and two writing sections. The new section, which replaces one
of the writing sections, is
the biggest change to the GMAT since the shift to
computer-adaptive testing 15 years ago, and one that has been in the works
since 2006, when GMAC first decided to revisit the exam and the skills it
was testing, says Dave Wilson, president and chief executive officer of
GMAC.
“At that time, we got a pretty good handle that the
GMAT was working, but we wanted to know if there was anything that we
weren’t measuring that would provide real value to the schools,” Wilson
says.
It turned out there was a whole slew of new skills
business school faculty believed could be added to the exam. The
recommendations put forth by Palencia and the rest of the committee that
convened in 2009 served as the conceptual building blocks for what a new
section might look like. Later that year, GMAC surveyed nearly 740 faculty
members around the world, from business professors to admissions officers,
who agreed with many of the committee’s findings and suggested that students
needed certain proficiencies to succeed in today’s technologically advanced,
data-driven workplaces.
For example, they gave “high importance” ratings to
skills such as synthesizing data, evaluating data from different sources,
and organizing and manipulating it to solve multiple, interrelated problems,
according to the Next Generation GMAC Skills Survey report.
Those are all examples of skills that can now be
found on the 30-minute Integrated Reasoning section, which GMAC has spent
$12 million developing over the past few years, Wilson says. It will have 12
questions and include pie charts, graphs, diagrams, and data tables. The
section employs four different types of questions that will allow students
to flex their analytical muscles.
Continued in article
Bob Jensen's threads on assessment are at
http://faculty.trinity.edu/rjensen/Assess.htm
"Obama’s Union-Friendly,
Feel-Good Approach to Education." by Kyle Olson, Townhall, March 30,
2011 ---
http://townhall.com/columnists/kyleolson/2011/03/30/obama%E2%80%99s_union-friendly,_feel-good_approach_to_education
The Obama administration, principally the president
and Education Secretary Arne Duncan, are now routinely making public
statements which are leading to one conclusion: instead of fixing American
education, we should dumb down the standards.
According to the
Associated Press, President Obama “is pushing a
rewrite of the nation’s education law that would ease some of its rigid
measurement tools” and wants “a test that ‘everybody agrees makes sense’ and
administer it in less pressure-packed atmospheres, potentially every few
years instead of annually.”
The article goes on to say that Obama wants to move
away from proficiency goals in math, science and reading, in favor of the
ambiguous and amorphous goals of student readiness for college and career.
Obama’s new focus comes on the heels of a
New York Times report that 80% of American public
schools could be labeled as failing under the standards of No Child Left
Behind.
Put another way: the standards under NCLB have
revealed that the American public education system is full of cancer.
Instead of treating the cancer, Obama wants to change the test, as if
ignoring the MRI somehow makes the cancer go away.
So instead of implementing sweeping policies to
correct the illness, Obama is suggesting that we just stop testing to
pretend it doesn’t exist.
If Obama were serious about curing the disease, one
of the best things he could do is to ensure that there is a quality teacher
in every classroom in America. Of course, that would mean getting rid of
teacher tenure and scrapping seniority rules that favor burned-out teachers
over ambitious and innovative young teachers.
That means standing up to the teacher unions. For a
while, it looked like Obama would get tough with the unions, but not
anymore. With a shaky economy and three wars, it looks like Obama’s
re-election is in serious jeopardy. He needs all hands on deck – thus the
new union-friendly education message.
Obama’s new direction will certainly make the
unionized adults happy. They’ve hated NCLB from the get-go.
And the unions will love Obama’s talk about using
criteria other than standardized testing in evaluating schools.
He doesn’t get specific, of course, but I bet I can
fill in the gaps. If testing is too harsh, perhaps we can judge students and
schools based on how hard they try or who can come up with the most
heart-wrenching excuse for failure or how big the dog was that ate their
homework.
Continued in article
"Department
of Injustice," by Walter E. Williams, Townhall, March 30. 2011 ---
http://townhall.com/columnists/walterewilliams/2011/03/30/department_of_injustice
One of the requirements to become a Dayton, Ohio
police officer is to successfully pass the city's two-part written
examination. Applicants must correctly answer 57 of 86 questions on the
first part (66 percent) and 73 of 102 (72 percent) on the second part.
Dayton's Civil Service Board reported that 490 candidates passed the
November 2010 written test, 57 of whom were black. About 231 of the roughly
1,100 test takers were black.
The U.S. Department of Justice, led by Attorney
General Eric Holder, rejected the results of Dayton's Civil Service
examination because not enough blacks passed. The DOJ has ordered the city
to lower the passing score. The lowered passing grade requires candidates to
answer 50 of 86 (58 percent) questions correctly on the first part and 64 of
102 (63 percent) of questions on the second. The DOJ-approved scoring policy
requires potential police officers to earn the equivalent of an "F" on the
first part and a "D" on the second. Based on the DOJ-imposed passing scores,
a total of 748 people, 258 more than before, were reported passing the exam.
Unreported was just how many of the 258 are black.
Keith Lander, chairman of the Dayton chapter of the
Southern Christian Leadership Conference, and Dayton NAACP president Derrick
Foward condemned the DOJ actions.
Mr. Lander said, "Lowering the test score is
insulting to black people," adding, "The DOJ is creating the perception that
black people are dumb by lowering the score. It's not accomplishing
anything."
Mr. Foward agreed and said, "The NAACP does not
support individuals failing a test and then having the opportunity to be
gainfully employed," adding, "If you lower the score for any group of
people, you're not getting the best qualified people for the job."
I am pleased by the positions taken by Messrs.
Lander and Foward. It is truly insulting to suggest that black people cannot
meet the same standards as white people and somehow justice requires lower
standards. Black performance on Dayton's Civil Service exam is really a
message about fraudulent high school diplomas that many black students
receive.
Continued in article
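As a quick check of the arithmetic in the column, the quoted passing fractions do round to the percentages reported. The raw counts below come from the article; the rounding and labels are ours:

```python
# Passing thresholds quoted in the article: (correct answers, total questions).
original = {"part 1": (57, 86), "part 2": (73, 102)}
lowered  = {"part 1": (50, 86), "part 2": (64, 102)}

def pct(correct, total):
    """Score as a whole-number percentage, rounded to nearest."""
    return round(100 * correct / total)

for label, (c, t) in original.items():
    print(f"original {label}: {c}/{t} = {pct(c, t)}%")   # 66% and 72%
for label, (c, t) in lowered.items():
    print(f"lowered  {label}: {c}/{t} = {pct(c, t)}%")   # 58% and 63%
```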
Assessment often gets caught in a tug of war between
accountability and improvement.
The Next Great Hope for Measuring Learning ---
http://www.chronicle.com/article/The-Next-Great-Hope-for/238075?cid=at&utm_source=at&utm_medium=en&elqTrackId=49382afe872f46a0b64064c090db9e53&elq=152fd248a4d244b6a1dfcf39b37cbd7c&elqaid=11117&elqat=1&elqCampaignId=4277
Jensen Comment
When it comes to assessment I tend to think of how I want my brain surgeon to be
assessed before he sticks something hard and sharp into my gray matter. I guess
the accountant in me leans toward accountability.
"Racial Stupidity and Malevolence,"
by Walter E. Williams, Townhall, September 8, 2010 ---
http://townhall.com/columnists/WalterEWilliams/2010/09/08/racial_stupidity_and_malevolence
The white liberal's agenda, coupled with that of
black race hustlers, has had and continues to have a devastating impact on
ordinary black people. Perhaps the most debilitating aspect of this liberal
malevolence is in the area of education.
Recently, I spoke with a Midwestern university
engineering professor who was trying to help an inner-city black student who
was admitted to the university's electrical engineering program. The student
was sure that he was well prepared for an engineering curriculum; his high
school had convinced him of that and the university recruiters supported
that notion. His poor performance on the university's math placement exam
required that he take remedial math courses. He's failed them and is now on
academic probation after two semesters of earning less than a 2.0 grade
point average.
The young man and his parents were sure of his
preparedness. After all, he had good high school grades, but those grades
only meant that he was well behaved. The college recruiters probably knew
this youngster didn't have the academic preparation for an electrical
engineering curriculum. They were more concerned with racial diversity.
This young man's background is far from unique.
Public schools give most black students fraudulent diplomas that certify a
12th-grade achievement level. According to a report by Abigail Thernstrom,
"The Racial Gap in Academic Achievement," black students in 12th grade dealt
with scientific problems at the level of whites in the sixth grade; they
wrote about as well as whites in the eighth grade. The average black high
school senior had math skills on a par with a typical white student in the
middle of ninth grade. The average 17-year-old black student could only read
as well as the typical white child who had not yet reached age 13.
Black youngsters who take the SAT exam earn an
average score that's 70 to 80 percent of the score of white students, and
keep in mind, the achievement level of white students is nothing to write
home about. Under misguided diversity pressures, colleges recruit many black
students who are academically ill equipped. Very often, these students
become quickly disillusioned, embarrassed and flunk out, or they're steered
into curricula that have little or no academic content, or professors
practice affirmative-action grading. In any case, the 12 years of poor
academic preparation is not repaired in four or five years of college. This
is seen by the huge performance gap between blacks and whites on exams for
graduate school admittance such as the GRE, MCAT and LSAT.
Is poor academic performance among blacks something
immutable or pre-ordained? There is no evidence for such a claim. Let's
sample some evidence from earlier periods. In "Assumptions Versus History in
Ethnic Education," in Teachers College Record (1981), Dr. Thomas Sowell
reports on academic achievement in some of New York city's public schools.
He compares test scores for sixth graders in Harlem schools with those in
the predominantly white Lower East Side for April 1941 and December 1941.
In paragraph and word meaning, Harlem students,
compared to Lower East Side students, scored equally or higher. In 1947 and
1951, Harlem third-graders in paragraph and word meaning, and arithmetic
reasoning and computation scored about the same as -- and in some cases,
slightly higher, and in others, slightly lower than -- their white Lower
East Side counterparts.
Going back to an earlier era, Washington, D.C.'s
Dunbar High School's black students scored higher in citywide tests than any
of the city's white schools. In fact, from its founding in 1870 to 1955,
most of Dunbar's graduates went off to college.
Let's return to the tale of the youngster at the
Midwestern college. Recruiting this youngster to be a failure is cruel,
psychologically damaging and an embarrassment for his family. But the campus
hustlers might come to the aid of the student by convincing him that his
academic failure is a result of white racism and Eurocentric values.
Bob Jensen's threads on grade inflation
are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
"GRE and SAT validity," by Stephen Hsu, Information Processing,
June 8, 2011 ---
http://infoproc.blogspot.com/2011/06/gre-and-sat-validity.html
GPA-SAT correlations
"Psychometric thresholds for physics and mathematics," by Stephen Hsu and
James Schombert, MIT's Technology Review, May 24, 2010 ---
http://www.technologyreview.com/blog/posts.aspx?bid=354
This is a follow up to our
earlier paper on GPA-SAT correlations. Click below
for the pdf.
Non-linear Psychometric Thresholds for Physics and Mathematics
ABSTRACT
We analyze 5 years of student records at the University of Oregon to
estimate the probability of success (as defined by superior
undergraduate record; sufficient for admission to graduate school)
in Physics and Mathematics as a function of SAT-M score. We find
evidence of a non-linear threshold: below SAT-M score of roughly
600, the probability of success is very low. Interestingly, no
similar threshold exists in other majors, such as Sociology,
History, English or Biology, whether on SAT combined, SAT-R or
SAT-M. Our findings have significant implications for the demographic
makeup of graduate populations in mathematically intensive subjects,
given the current distribution of SAT-M scores.
There is clearly something different about the physics
and math GPA vs SAT distributions compared to all of the other majors we
looked at (see figure 1 in the paper). In the other majors (history,
sociology, etc.) it appears that hard work can compensate for low SAT score.
But that is not the case in math and physics.
One interesting question is whether the apparent cognitive threshold is a
linear or non-linear effect. Our data suggests that the probability of doing
well in any particular quarter of introductory physics may be linear with
SAT-M, but the probability of having a high cumulative GPA in physics
or math is very non-linear in SAT-M. See figure below: the red line is the
upper bound at 95% confidence level on the probability of getting an A in a
particular quarter of introductory physics, and the blue line is the
probability of earning a cumulative GPA of at least 3.5 or so.
Continued in article
Jensen Comment
Near perfection in grade averages is increasing due to grade inflation in both
high school and college ---
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
Hence I would think SAT, ACT, GRE, GMAT, LSAT, and MCAT standardized tests
would be used to further partition graduates with stellar grade averages.
Tests measure cognitive ability, but grades measure motivation as long as
grade inflation does not ruin everything in education.
About ETS Research ---
http://www.ets.org/research
More credit should be given to the efforts made by ETS to reduce cultural and
disability factors in testing.
Paying Students to Raise Test Scores ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GMAT
Some states are rigging achievement tests to get more money and deceive the public
Will future college graduates in President Obama's home town be able to read and
divide 37/13?
But they will be college "graduates" if community colleges lower standards like
their K-12 counterparts.
"Second City Ruse: How states like Illinois rig school tests to hype phony
achievement," The Wall Street Journal, July 18, 2009 ---
http://online.wsj.com/article/SB124786847585659969.html#mod=djemEditorialPage
When President Obama chose Arne Duncan to lead the Education Department, he
cited Mr. Duncan's success as head of Chicago's public school system from 2001
to 2008. But a new education study suggests that those academic gains aren't
what they seemed. The study also helps explain why big-city education reform is
unlikely to occur without school choice.
Mr. Obama noted in December that "in just seven years, Arne's boosted elementary
test scores here in Chicago from 38% of students meeting the standard to 67%"
and that "the dropout rate has gone down every year he's been in charge." But
according to "Still Left Behind," a report by the Civic Committee of the
Commercial Club of Chicago, a majority of Chicago public school students still
drop out or fail to graduate with their class. Moreover, "recent dramatic gains
in the reported number of CPS elementary students who meet standards on state
assessments appear to be due to changes in the tests . . . rather than real
improvements in student learning."
Our point here isn't to pick on Mr. Duncan, but to illuminate the ease with
which tests can give the illusion of achievement. Under the 2001 No Child Left
Behind law, states must test annually in grades 3 through 8 and achieve 100%
proficiency by 2014. But the law gives states wide latitude to craft their own
exams and to define math and reading proficiency. So state tests vary widely in
rigor, and some have lowered passing scores and made other changes that give a
false impression of academic success.
The new Chicago report explains that most of the improvement in elementary test
scores came after the Illinois Standards Achievement Test was altered in 2006 to
comply with NCLB. "State and local school officials knew that the new test and
procedures made it easier for students throughout the state -- and throughout
Chicago -- to obtain higher marks," says the report.
Chicago students fared much worse on national exams that weren't designed by
state officials. On the 2007 state test, for example, 71% of Chicago's 8th
graders met or exceeded state standards in math, up from 32% in 2005. But
results from the National Assessment of Educational Progress exam, a federal
standardized test sponsored by the Department of Education, show that only 13%
of the city's 8th graders were proficient in math in 2007. While that was better
than 11% in 2005, it wasn't close to the 39 percentage-point increase reflected
on the Illinois state exam.
In Mr. Duncan's defense, he wasn't responsible for the new lower standards,
which were authorized by state education officials. In 2006, he responded to a
Chicago Tribune editorial headlined, "An 'A' for Everybody!" by noting
(correctly) that "this is the test the state provided; this is the state
standard our students were asked to meet." But this doesn't change the fact that
by defining proficiency downward, states are setting up children to fail in high
school and college. We should add that we've praised New York City test results
that the Thomas B. Fordham Institute also claims are inflated, but we still
favor mayoral control of New York's schools as a way to break through the
bureaucracy and drive more charter schools.
And speaking of charters, the Chicago study says they "provide one bright spot
in the generally disappointing performance of Chicago's public schools." The
city has 30 charters with 67 campuses serving 30,000 students out of a total
public school population of 408,000. Another 13,000 kids are on wait lists
because the charters are at capacity, and it's no mystery why. Last year 91% of
charter elementary schools and 88% of charter high schools had a higher
percentage of students meeting or exceeding state standards than the
neighborhood schools that the students otherwise would have attended.
Similar results have been observed from Los Angeles to Houston to Harlem. The
same kids with the same backgrounds tend to do better in charter schools, though
they typically receive less per-pupil funding than traditional public schools.
In May, the state legislature voted to increase the cap on Chicago charter
schools to 70 from 30, though Illinois Governor Pat Quinn has yet to sign the
bill.
Chicago Mayor Richard Daley deserves credit for hiring Mr. Duncan, a charter
proponent. But in deference to teachers unions that oppose school choice, Mr.
Daley stayed mostly silent during the debate over the charter cap. That's
regrettable, because it's becoming clear that Chicago's claim of reform success
among noncharter schools is phony.
Bob Jensen's threads on assessment are at
http://faculty.trinity.edu/rjensen/assess.htm
Bob Jensen's threads on higher education controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
One Impact of Higher Admission Standards --- Less Revenue
"New Approach at U. of Phoenix Drives Down Parent Company's Stock,"
Inside Higher Ed, March 30, 2011 ---
http://www.insidehighered.com/news/2011/03/30/qt#255383
The Apollo Group on Tuesday
announced a quarterly loss and enrollment declines
at the University of Phoenix that were largely attributable to changes in
the for-profit institution's policies aimed at ensuring that more of the
students it enrolls can succeed academically. The company's announcement of
its second quarter results drove down its stock price,
Bloomberg reported. Apollo saw enrollment of new
students in University of Phoenix degree programs fall by 45 percent from a
year ago, and said its policy of requiring new students with few academic
credits to enroll in a free orientation program to see if they are cut out
for college-level work had suppressed enrollments in the short term but put
it "on a path of more consistently delivering high quality growth" in the
future. Phoenix, as the biggest and most visible player in the for-profit
higher education sector, has been under intense scrutiny amid discussion of
increased federal regulation, and it has put in place a series of changes
(including changing how it compensates recruiters),
its officials have said, to try to lead the
industry in a new direction.
Bob Jensen's threads on for-profit universities ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#ForProfitFraud
Performance Evaluation
Let's face it! Accounting, professors' job performance, and vegetable nutrition
have a lot of systemic problems in common ---
http://faculty.trinity.edu/rjensen/FraudConclusion.htm#BadNews
American Council on Education - GED Testing ---
http://www.acenet.edu/Content/NavigationMenu/ged/index.htm
"Why I Hate Annual Evaluations," by Ben Yagoda, Chronicle of Higher
Education, March 28, 2010 ---
http://chronicle.com/article/Why-I-Hate-Annual-Evaluations/64815/
There are three things I don't like about my job.
Two of them are pretty obvious and completely unoriginal: correcting papers
and attending department meetings. The third thing is somewhat obvious as
well, but I hesitate to name it, for fear that it will make me look whiny.
However, that battle has probably already been
lost, so here goes: I hate my annual evaluation.
To the extent that this evaluation is necessary, it
is because of the collective-bargaining agreement between the University of
Delaware and our campus chapter of the American Association of University
Professors. As long as I've been here—going on 18 years—the agreement has
divided our annual pay raises into two parts. The first part is across the
board. This year our raise was 4 percent, of which 1.5 percent was across
the board, meaning, for example, that a full professor making the minimum
salary of about $85,000 got a raise of about $1,275.
The other part of the raise is based on "merit,"
and it works as follows. The average faculty salary is calculated. Say it is
$100,000. Every unit gets a pot of cash equivalent to 2.5 percent, or
$2,500, multiplied by the number of faculty members in the unit. In my unit,
the English department, that would be roughly 50 bodies. The chairman of the
department evaluates each professor's performance. The professor who is
precisely in the middle gets a $2,500 merit raise. Those rated higher will
get more, those rated lower will get less, but the average merit raise has
to be $2,500.
In other words, no department can be a Lake Wobegon,
where all the children are above average.
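The pool mechanics described above amount to a simple mean-preserving allocation. Here is a minimal sketch using the article's illustrative figures ($100,000 average salary, 2.5 percent merit pool, $2,500 per head); the proportional-to-rating allocation rule is my own assumption for illustration, not the university's actual formula.

```python
# Sketch of the merit-pool arithmetic Yagoda describes (all figures are
# the article's illustrative numbers; the allocation rule is assumed).

def merit_raises(ratings, avg_salary=100_000, merit_pct=0.025):
    """Distribute a merit pool so raises average exactly merit_pct * avg_salary."""
    pool_per_head = avg_salary * merit_pct          # $2,500 in the example
    mean_rating = sum(ratings) / len(ratings)
    # Raises scale with each rating, but the department-wide average is
    # pinned to pool_per_head -- no Lake Wobegon is arithmetically possible.
    return [pool_per_head * r / mean_rating for r in ratings]

raises = merit_raises([5, 6, 7, 9, 3])
print([round(r) for r in raises])
print(round(sum(raises) / len(raises)))  # always 2500 by construction
```

A professor rated exactly at the department mean receives exactly $2,500; anyone above the mean is funded by someone below it.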
On paper, this all seems reasonable, and I freely
admit that part of my outsized resentment of the process stems from my own
quirks. It requires a lot of paperwork and rewards good record keeping. I
despise paperwork and am truly terrible at record keeping. (It is a cruel
twist of fate in my world that evaluation time and tax time arrive
together.) My early experience in the working world taught me that I also
deeply and irrationally resent being judged by a boss, which is probably the
main reason why, before becoming an academic, I was a freelance writer and
thus my own boss. Now here I am being evaluated by the department chair, who
isn't really my boss, but at this point the difference seems negligible.
But I maintain that some of my gripes have
objective merit. American colleges and universities, including the
University of Delaware, still view faculty members as a group of scholars
and teachers devoted to and bound by self-instilled standards of excellence.
Tenure, as long as it continues to exist, must and does require evaluation.
But—crucially—at Delaware and elsewhere, that evaluation and judgment are
performed not by the chair but by one's peers (ultimately ratified or not,
to be sure, by provosts, presidents, and other higher-ups).
For faculty members who will eventually go up for
tenure, it definitely makes sense to get input from as many sources as
possible, so I'll grant that for them an annual evaluation by the chair
makes sense. But for tenured faculty members? No—at least not the way we do
it at my university.
Every year around this time, we submit our
materials—publications, syllabi, evidence of service, and so forth—and fill
out a Web form. The chair, who has meanwhile received copies of students'
evaluations of our teaching, rates all of us on a scale of 1 (the worst) to
9 (the best) in scholarship, service, and teaching. Different percentages
are accorded to each area based on an elaborate formula, but generally
speaking, for tenured and tenure-track professors, scholarship counts for
roughly 50 percent, teaching 40 percent, and service 10 percent.
The whole thing is undignified and unseemly. What,
exactly, is the difference between a 5 and 7 in service? Number of
committees served on? Hours spent? Scholarship is even more thorny, because
as everyone knows, an article does not equal an article. Do two short
articles in PMLA equal a New York Review of Books mega-essay, or do I have
to throw in a draft choice and a player to be named later? Number of words
produced and place of publication are important, to be sure, but quality
trumps them both. And how can our chair be expected to judge the quality of
the work of every faculty member, some of whom work in fields very different
from his? The answer is he can't.
Evaluating teaching has its own well-documented set
of problems. We honor faculty autonomy to the extent that evaluators are not
welcome in another professor's classroom, and we are still a good distance
away from giving students No Child Left Behind tests that would "assess" the
extent to which a certain course has achieved its "goals." That's well and
good, but it doesn't leave much as a basis for judgment. There are syllabi
and the narrative Teaching Statements we provide each year, and sometimes
the evidence of a new course devised and designed, but the main thing used
to assess teaching are student evaluations. Those have some value, but they
are most assuredly not the whole story when it comes to the quality of one's
teaching. If they were, we might as well outsource the whole process to
RateMyProfessors.com.
The unseemliness multiplies when my colleagues (as
they often do) complain loudly and frequently about the marks they have
gotten. I would be embarrassed to tell you how many laments I have listened
to along the lines of, "I published a book, and he only gave me a 7!" I
would bet our students don't kvetch as much about their grades.
And what are the consequences of our evaluations?
In the 50-40-10 scholarship-teaching-service ratio, the difference between a
7 and a 9 rating in scholarship is about $540 a year. After taxes, that
comes out to maybe $400 a year, or $8 a week. Not only is that not much, but
for almost everyone, it gets evened out over time; some years, you can
expect to get maybe a little lower rating than you "really" deserve, some
years a little higher. For this my colleagues gnash their teeth and lose
sleep?
Several years ago, I came up with another way to
evaluate faculty performance, based on the understanding that we all expect
excellent work from ourselves and one another. Take the average merit raise
and give almost everyone in the department a raise slightly lower than that;
in the example I've been working with, that could be $2,300. That way, a
handful of colleagues who publish major books or get major awards or stellar
teaching evaluations can receive a slightly higher raise. And if a couple of
people are blatantly not carrying their weight, they can get a little less.
I proposed my idea at a department meeting, and it
was summarily shot down. My explanation for this is Freud's notion of the
narcissism of small differences—our need to exaggerate the minimal
distinctions between ourselves and people very much like ourselves.
Even as I write, we are negotiating our next
collective-bargaining agreement. Word on the street is that salaries will be
frozen for next year. If that happens, I will be secretly glad, and you know
why: It could very possibly mean no annual evaluation!
Ben Yagoda is a professor of English at the
University of Delaware and author, most recently, of Memoir: A History
(Riverhead Books, 2009). His blog on higher education is at
http://campuscomments.wordpress.com
Bob Jensen's threads on higher education are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
"What Gets Measured in Education," by Alan Kantrow, Harvard
Business Review Blog, October 8, 2013 ---
Click Here
http://blogs.hbr.org/2013/10/what-gets-measured-in-education/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+harvardbusiness+%28HBR.org%29&cm_ite=DailyAlert-100913+%281%29&cm_lm=sp%3Arjensen%40trinity.edu&cm_ven=Spop-Email
Bob Jensen's threads on assessment ---
http://faculty.trinity.edu/rjensen/Assess.htm
Assessment often gets caught in a tug of war between
accountability and improvement.
The Next Great Hope for Measuring Learning ---
http://www.chronicle.com/article/The-Next-Great-Hope-for/238075?cid=at&utm_source=at&utm_medium=en&elqTrackId=49382afe872f46a0b64064c090db9e53&elq=152fd248a4d244b6a1dfcf39b37cbd7c&elqaid=11117&elqat=1&elqCampaignId=4277
Jensen Comment
When it comes to assessment I tend to think of how I want my brain surgeon to be
assessed before he sticks something hard and sharp into my gray matter. I guess
the accountant in me leans toward accountability.
"Study: Little Difference in Learning in Online and In-Class Science
Courses," Inside Higher Ed, October 22, 2012 ---
http://www.insidehighered.com/quicktakes/2012/10/22/study-little-difference-learning-online-and-class-science-courses
A study in Colorado has found little difference in
the learning of students in online or in-person introductory science
courses. The study tracked community college students who took science
courses online and in traditional classes, and who then went on to four-year
universities in the state. Upon transferring, the students in the two groups
performed equally well. Some science faculty members have expressed
skepticism about the ability of online students in science, due to the lack
of group laboratory opportunities, but the programs in Colorado work with
companies to provide home kits so that online students can have a lab
experience.
Jensen Comment
Firstly, note that online courses are not necessarily mass education (MOOC)
styled courses. The student-student and student-faculty interactions can be
greater online than onsite. For example, my daughter's introductory chemistry
class at the University of Texas had over 600 students. On the date of the final
examination the professor had never met her and had zero control over her final grade. On the
other hand, her microbiology instructor in a graduate course at the University
of Maine became her husband over 20 years ago.
Another factor is networking. For example, Harvard Business School students
meeting face-to-face in courses bond in life-long networks that may be stronger
than for students who've never established networks via classes, dining halls,
volleyball games, softball games, rowing on the Charles River, etc. There's
more to learning than is typically tested in competency examinations.
My point is that there are many externalities to both onsite and online
learning. And concluding that there's "little difference in learning" depends
upon what you mean by learning. The SCALE experiments at the University of
Illinois found that online students having the same instructor tended to do
slightly better than onsite students. This is partly because there are fewer logistical
time wasters in online learning. The effect becomes larger for off-campus
students where commuting time (as in Mexico City) can take hours going to and
from campus.
http://faculty.trinity.edu/rjensen/255wp.htm
Bob Jensen's threads on assessment are at
http://faculty.trinity.edu/rjensen/Assess.htm
Khan Academy for Free Tutorials (now including accounting tutorials)
Available to the Masses ---
http://en.wikipedia.org/wiki/Khan_Academy
A Really Misleading Video
Do Khan Academy Videos Promote “Meaningful Learning”?
Click Here
http://www.openculture.com/2012/06/expert_gently_asks_whether_khan_academy_videos_promote_meaningful_learning.html?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+OpenCulture+%28Open+Culture%29
If you ever wondered whether professional scientists are skeptical about some
of the incredibly fun, attractive and brief online videos that purport to
explain scientific principles in a few minutes, you'd be right.
Derek Muller completed his doctoral dissertation by researching the question
of what makes for effective multimedia to teach physics. Muller curates the
science blog Veritasium and received his Ph.D. from the University of Sydney
in 2008.
It’s no small irony that Muller’s argument, that online instructional videos
don’t work, has reached its biggest audience in the form of an online video.
He launches right in, lecture style, with a gentle attack on the Khan Academy,
which has famously flooded the Internet with free instructional videos on
every subject from arithmetic to finance.
While praising the academy’s founder, Salman Khan, for his teaching and
speaking talent, Muller contends that students actually don’t learn anything
from science videos in general.
In experiments, he asked subjects to describe the force acting upon a ball
when a juggler tosses it into the air. Then he showed them a short video that
explained gravitational force.
In tests taken after watching the video, subjects provided essentially the
same description as before. Subjects said they didn’t pay attention to the
video because they thought they already knew the answer. If anything, the
video only made them more confident about their own ideas.
Science instructional videos, Muller argues, shouldn’t just explain correct
information, but should tackle misconceptions as well. He practices this
approach in his own work, like this film about weightlessness in the space
station. Having to work harder to think through why an idea is wrong, he says,
is just as important as being told what’s right.
Jensen Comment
In my viewpoint learning efficiency and effectiveness is so complicated in a
multivariate sense that no studies, including Muller's experiments, can be
extrapolated to something as vast as the Khan Academy.
For example, the learning from a given tutorial depends immensely on the
aptitude of the learner and the intensity of concentration and replay of the
tutorial.
For example, learning varies over time such as when a student is really bad
at math until a point is reached where that student suddenly blossoms in math.
For example, the learning from a given tutorial depends upon the ultimate
testing expected.
What they learn depends upon how we test:
I consider Muller's video misleading and superficial.
Here are some documents on the multivariate complications of
the learning process:
Khan Academy ---
http://en.wikipedia.org/wiki/Khan_Academy
The Trouble With Derek Muller
The trouble with Robert Talbert is that he relies on Derek Muller's superficial
experiments on undergraduates and then extrapolates the findings to the entire
world. He's Exhibit A for what we warn doctoral students about when they are
learning how to conduct research and write up its results.
In my viewpoint the learning efficiency and effectiveness of any pedagogy is so
complicated in a multivariate sense that no studies, including Muller's
experiments, can be extrapolated to something as vast as the Khan Academy.
It all boils down to how badly a student wants to learn something like how to
take the derivative of a polynomial. Chances are that if a student is totally
motivated and intent on learning this process, he or she can keep studying and
re-studying Khan Academy videos for mastery learning far beyond what most any
other pedagogy on this subject can offer.
The writings of Derek Muller are too superficial for my liking. Of course,
learning from the Khan Academy can be superficial if the students are not
intently focused on really, really wanting to learn. So what does that prove
about the students who are intently focused on really, really wanting to learn?
The Khan Academy is really intended for students who really, really want to
learn. Don't knock it just because it doesn't work as well for unmotivated
students used in superficial experiments.
A Really, Really Misleading Video
Do Khan Academy Videos Promote “Meaningful Learning”?
Click Here
http://www.openculture.com/2012/06/expert_gently_asks_whether_khan_academy_videos_promote_meaningful_learning.html?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+OpenCulture+%28Open+Culture%29
A Really Misleading Article
"The trouble with Khan Academy," by Robert Talbert, Chronicle of Higher
Education, July 3, 2012
http://chronicle.com/blognetwork/castingoutnines/2012/07/03/the-trouble-with-khan-academy/?cid=wc&utm_source=wc&utm_medium=en
Competency-Based Programs (where instructors do not assign the grades) Can
Work Well But Do Not Always Work Well
A Research Report
"Competency-Based Degree Programs in the U.S. Postsecondary Credentials for
Measurable Student Learning and Performance," Council on Adult and Experiential
Learning," 2012 ---
http://www.cael.org/pdfs/2012_CompetencyBasedPrograms
American RadioWorks: Testing Teachers (radio broadcast) ---
http://americanradioworks.publicradio.org/features/testing_teachers/
It's About Time
"Settlement Reached in Essay-Mill Lawsuit." by Paige Chapman,
Chronicle of Higher Education, October 25, 2010 ---
http://chronicle.com/blogs/wiredcampus/settlement-reached-in-essay-mill-lawsuit/27852?sid=wc&utm_source=wc&utm_medium=en
Bob Jensen's threads about academic cheating ---
http://faculty.trinity.edu/rjensen/Plagiarism.htm
Questions
Should a doctoral student be allowed to hire an editor to help write her
dissertation?
If the answer is yes, should this also apply to any student writing a course
project, take home exam, or term paper?
Answer
Forwarded by Aaron Konstam
"Academic Frauds," The Chronicle of Higher Education, November
3, 2003 --- http://chronicle.com/jobs/2003/11/2003110301c.htm
Question (from "Honest John"): I'm a
troubled member of a dissertation committee at Private U, where I'm not a
regular faculty member (although I have a doctorate). "Bertha" is a
"mature" student in chronological terms only. The scope of her
dissertation research is ambiguous, and the quality of her proposal is
substandard. The committee chair just told me that Bertha is hiring an editor
to "assist" her in writing her dissertation. I'm outraged. I've
complained to the chair and the director of doctoral studies, but if Bertha is
allowed to continue having an "editor" to do her dissertation,
shouldn't I report the university to an accreditation agency? This is too big
a violation of integrity for me to walk away.
Answer: Ms. Mentor shares your outrage -- but first,
on behalf of Bertha, who has been betrayed by her advisers.
In past generations, the model of a modern
academician was a whiz-kid nerd, who zoomed through classes and degrees, never
left school, and scored his Ph.D. at 28 or so. (Nietzsche was a full professor
at 24.) Bertha is more typical today. She's had another life first.
Most likely she's been a mom and perhaps a
blue-collar worker -- so she knows about economics, time management, and child
development. Maybe she's been a musician, a technician, or a mogul -- and now
wants to mentor others, pass on what she's known. Ms. Mentor hears from many
Berthas.
Returning adult students are brave. "Phil"
found that young students called him "the old dude" and snorted when
he spoke in class. "Barbara" spent a semester feuding with three
frat boys after she told them to "stop clowning around. I'm paying good
money for this course." And "Millie's" sister couldn't
understand her thirst for knowledge: "Isn't your husband rich enough so
you can just stay home and enjoy yourself?"
Some tasks, Ms. Mentor admits, are easier for the
young -- pole-vaulting, for instance, and pregnancy. Writing a memoir is
easier when one is old. And no one under 35, she has come to suspect, should
give anyone advice about anything. But Bertha's problem is more about academic
skills than age.
Her dissertation plan may be too ambitious, and her
writing may be rusty -- but it's her committee's job to help her. All
dissertation writers have to learn to narrow and clarify their topics and pace
themselves. That is part of the intellectual discipline. Dissertation writers
learn that theirs needn't be the definitive word, just the completed one, for
a Ph.D. is the equivalent of a union card -- an entree to the profession.
But instead of teaching Bertha what she needs to
know, her committee (except for Honest John) seems willing to let her hire a
ghost writer.
Ms. Mentor wonders why. Do they see themselves as
judges and credential-granters, but not teachers? Ms. Mentor will concede that
not everyone is a writing genius: Academic jargon and clunky sentences do give
her twitching fits. But while not everyone has a flair, every academic must
write correct, clear, serviceable prose for memos, syllabuses, e-mail
messages, reports, grant proposals, articles, and books.
Being an academic means learning to be an academic
writer -- but Bertha's committee is unloading her onto a hired editor, at her
own expense. Instead of birthing her own dissertation, she's getting a
surrogate. Ms. Mentor feels the whole process is fraudulent and shameful.
What to do?
Ms. Mentor suggests that Honest John talk with Bertha
about what a dissertation truly involves. (He may include Ms. Mentor's column
on "Should You Aim to Be a Professor?") No one seems to have told
Bertha that it is an individual's search for a small corner of truth and that
it should teach her how to organize and write up her findings.
Moreover, Bertha may not know the facts of the job
market in her field. If she aims to be a professor but is a mediocre writer,
her chances of being hired and tenured -- especially if there's age
discrimination -- may be practically nil. There are better investments.
But if Bertha insists on keeping her editor, and her
committee and the director of doctoral studies all collude in allowing this
academic fraud to take place, what should Honest John do?
He should resign from the committee, Ms. Mentor
believes: Why spend his energies with dishonest people? He will have exhausted
"internal remedies" -- ways to complain within the university -- and
it is a melancholy truth that most bureaucracies prefer coverups to
confrontations. If there are no channels to go through, Honest John may as
well create his own -- by contacting the accrediting agencies, professional
organizations in the field, and anyone else who might be interested.
Continued in the article.
November 3, 2003 reply from David R. Fordham [fordhadr@JMU.EDU]
Bob, there are two very different questions being
addressed here.
The first deals with the revelation that “her
dissertation research is ambiguous, and the quality of her proposal is
substandard”.
The editing of a manuscript is a completely different
issue.
The ambiguity of the research and the flaws with the
proposal should be addressed far more forcefully than the editing issue!
Care should be used to ensure that the editor simply
edits (corrects grammar, tense, case, person, etc.), and isn’t responsible
for the creation of ideas. But if the editor is a professional editor who
understands the scope of his/her job, I don’t see why editing should be an
issue for anyone, unless the purpose of the dissertation exercise is to
evaluate the person’s mastery of the minutiae of the English language (in
which case the editor is indeed inappropriate).
Talk about picking your battles … I’d be a lot
more upset about ambiguous research than whether someone corrected her
sentence structure. I believe the whistle-blower needs to take a closer look
at his/her priorities. A flag needs to be raised, but about the more important
of the two issues.
David R. Fordham
PBGH Faculty Fellow
James Madison University
Assessment in Math and Science: What's the Point? ---
http://www.learner.org/resources/series93.html
Rubrics in Academia --- https://en.wikipedia.org/wiki/Rubric_(academic)
"Assessing, Without Tests," by Paul Fain, Inside Higher Ed,
February 17, 2016 ---
https://www.insidehighered.com/news/2016/02/17/survey-finds-increased-use-learning-outcomes-measures-decline-standardized-tests?utm_source=Inside+Higher+Ed&utm_campaign=60a80c3a41-DNU20160217&utm_medium=email&utm_term=0_1fcbc04421-60a80c3a41-197565045
Jensen Comment
Testing becomes more effective for grading and licensing purposes as class sizes
increase. It's less effective when hands-on experience is a larger part of
competency evaluation. For example, in the final stages of competency evaluation
in neurosurgery testing becomes less important than expert evaluation of
surgeries being performed in operating rooms. I want my brain surgeon to be much
more than a good test taker. Testing is more cost effective when assigning
academic credit for a MOOC mathematics course taken by over 5,000 students.
One thing to keep in mind is that testing serves a much larger purpose than
grading the amount of learning. Testing is a huge motivator as evidenced
by how students work so much harder to learn just prior to being tested.
Some types of testing are also great integrators of multiple facets of
a course. This is one justification for having comprehensive final examinations.
Testing also can overcome racial, ethnic, and cultural biases. This is the
justification, for example, for making licensing examinations, such as CPA,
bar, and nursing examinations, blind to racial, ethnic, and cultural bias.
This is also one of the justifications (good or bad) for taking grading out
of the jurisdiction of teachers. Competency examinations also serve the
purpose of giving credit for learning no matter how or where the subject
matter is learned. Years ago
people could take final examinations at the University of Chicago without ever
having attended classes in a course ---
http://faculty.trinity.edu/rjensen/assess.htm#ConceptKnowledge
Bob Jensen's threads on assessment ---
http://faculty.trinity.edu/rjensen/assess.htm
How to Mislead With Statistics of Merit Scholars: "Mom, Please Get
Me Out of South Dakota!"
Probabilities of Being a Merit Scholar Vary Intentionally With Geography:
The Odds are Higher in East St. Louis or Cactus Gulch, Nevada
"Not-So-National Merit," by Ian Ayres, Freakonomics, April 4, 2014 ---
http://freakonomics.com/2014/04/04/not-so-national-merit/
Last December, thousands of high school sophomores
and juniors learned the results of the 2013 Preliminary SAT (PSAT) test.
The juniors’ test scores will be used to determine whether they qualify as
semifinalists for the prestigious National Merit Scholarship, which in turn
makes them eligible for a host of automatic
college scholarships. (Sophomores take the test
just as practice.)
The juniors will have to wait to find out for sure
if they qualify until September, just before they begin submitting
applications to colleges across the country. But it is fairly
straightforward to predict, based on their scores and last year’s cutoffs,
whether they will qualify as semifinalists.
Many students would be
surprised to learn that qualification depends not only
on how high they score, but also on where they go to school.
The National Merit Scholarship Corporation (NMSC) sets different qualifying
cutoffs for each state to “ensure
that academically talented young people from all parts of the United States
are included in this talent pool.” They have not
disclosed any specific criteria for setting the state cutoffs.
A high school student’s chances of receiving the
award can depend crucially on his or her state of residence. Last year,
students in West Virginia needed only a 203 to qualify as a semifinalist
(scores range from 60-240), while students from Texas needed a 219 and
students from Washington, D.C. a 224. Nationally, the West Virginia score
was in the 97th percentile of scores, while the Washington, D.C.
score was at the 99.5th percentile based on a mean score of 143
and a standard deviation of 31.
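The percentiles quoted above can be checked with a normal approximation of the national score distribution, using the mean of 143 and standard deviation of 31 reported in the article. This is only a sketch; the assumption that PSAT scores are normally distributed is mine, not the article's.

```python
from math import erf, sqrt

# National PSAT distribution parameters as reported in the article.
MEAN, SD = 143, 31

def national_percentile(score: float) -> float:
    """Fraction of test takers at or below `score`, assuming a normal distribution."""
    z = (score - MEAN) / SD
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF via erf

# Last year's semifinalist cutoffs cited in the article:
for state, cutoff in [("West Virginia", 203), ("Texas", 219), ("Washington, D.C.", 224)]:
    print(f"{state}: cutoff {cutoff} is about the {national_percentile(cutoff):.1%} point nationally")
```

Running this reproduces the article's figures: the West Virginia cutoff of 203 lands near the 97th percentile and the D.C. cutoff of 224 near the 99.5th.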
I’ve crudely estimated that because of this state
cutoff discrimination, approximately 15% of students (about 2,400 students a
year) who are awarded semifinalist status have lower scores than
other students who were not semifinalists merely due to their geographic
location. Troublesomely, I also found that states with larger minority
populations tend to have higher cutoffs.
Instead of just complaining, I have partnered with
an extraordinary high-school sophomore from New Jersey named India
Unger-Harquail to try to do something about it.
We’ve just launched a new website, AcadiumScholar.org.
You can go to the site, enter a score, and it will quickly
tell you the states where your score would have qualified you as an NMSC
semifinalist.
But wait, there’s more. The site also offers to
certify qualified students based on a national standard of merit. If you
represent and warrant to us that you received a PSAT score meeting the
minimum cutoff in at least one state (and you give us the opportunity to try
to verify the accuracy of your score with NMSC), we’ll give you the right to
describe yourself as an “Acadium Scholar.” We’ve separately applied to the
USPTO to register that phrase as a certification mark
(in parallel fashion to my earlier “fair
employment mark”).
Instead of the yes-or-no signal offered by the NMSC,
we’ll also certify students based on the number of states in which they
would have qualified as semifinalists. For example, a student who scored a
211 could be certified to describe herself as a “19-state Acadium Scholar.”
Our certification allows:
· A student from a strong cutoff-state,
like Texas, who scores a 218 (just missing the Lone Star qualifying
cutoff of 219) to say nonetheless that he’s a 41-state Acadium Scholar.
· A student from a weak cutoff state,
like North Dakota, who scores an extraordinary 235 on the exam to say
that she is a 50-state Acadium Scholar.
We’re even letting sophomores use their scores to
certify so that all the pressure isn’t on junior year. There are also some
sophomores who may have scored ten points better in their sophomore than
their junior year. Now those students can certify as Acadium Scholars based
on their higher scores.
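The certification rule described above is just a count of the states whose cutoff a score meets or exceeds. A minimal sketch follows; only the West Virginia, Texas, and Washington, D.C. cutoffs come from the article, and the other entries are placeholders I made up for illustration.

```python
# Hypothetical cutoff table: WV, TX, and D.C. values are from the article;
# the North Dakota and Massachusetts values are assumed for illustration.
STATE_CUTOFFS = {
    "West Virginia": 203,
    "North Dakota": 202,      # assumed
    "Texas": 219,
    "Massachusetts": 221,     # assumed
    "Washington, D.C.": 224,
}

def acadium_states(score: int) -> list[str]:
    """Jurisdictions whose semifinalist cutoff this score meets or exceeds."""
    return sorted(s for s, c in STATE_CUTOFFS.items() if score >= c)

# A student scoring 218 meets the WV and ND cutoffs but misses Texas by one point.
print(acadium_states(218))
```

With a full 50-state table, `len(acadium_states(score))` would give the "N-state Acadium Scholar" number the article describes.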
Continued in article
Jensen Comment
Many elite colleges in search of diversity in geography as well as race and
religion admit to varying admission standards for geography. It's harder to get
into Harvard from Massachusetts than it is from Wyoming or Alaska.
Bob Jensen's threads on assessments ---
http://faculty.trinity.edu/rjensen/Assess.htm
Culture matters enormously. Do better analytics lead
managers to "improve" or "remove" the measurably underperforming? Are analytics
internally marketed and perceived as diagnostics for helping people and
processes perform "better"? Or do they identify the productivity pathogens that
must quickly and cost-effectively be organizationally excised? What I've
observed is that many organizations have invested more thought into acquiring
analytic capabilities than confronting the accountability crises they may
create.
"The Real Reason Organizations Resist Analytics," by Michael Schrage,
Harvard Business Review Blog, January 29, 2013 ---
Click Here
http://blogs.hbr.org/schrage/2013/01/the-real-reason-organizations.html?referral=00563&cm_mmc=email-_-newsletter-_-daily_alert-_-alert_date&utm_source=newsletter_daily_alert&utm_medium=email&utm_campaign=alert_date
While discussing a Harvard colleague's world-class
work on how big data and analytics transform public sector effectiveness, I
couldn't help but ask: How many public school systems had reached out to him
for advice?
His answer surprised. "I can't think of any," he
said. "I guess some organizations are more interested in accountability than
others."
Exactly. Enterprise politics and culture suggest
analytics' impact is less about measuring existing performance than creating
new accountability. Managements may want to dramatically improve
productivity but they're decidedly mixed about comparably increasing their
accountability.
Accountability is often the unhappy byproduct
rather than desirable outcome of innovative analytics. Greater
accountability makes people nervous.
That's not unreasonable. Look at the
vicious politics and debate in New York and other
cities over analytics' role in assessing public school teacher performance.
The teachers' union argues the metrics are an unfair and pseudo-scientific
tool to justify firings. Analytics' champions insist that
the transparency and insight these metrics provide are essential for
determining classroom quality and outcomes. The
arguments over numbers are really fights over accountability and its
consequences.
At one global technology services firm, salespeople
grew furious with a CRM system whose new analytics effectively held them
accountable for pricing and promotion practices they thought undermined
their key account relationships. The sophisticated and near-real-time
analytics created the worst of both worlds for them: greater accountability
with less flexibility and influence.
The evolving marriage of big data to analytics
increasingly leads to a phenomenon I'd describe as "accountability creep" —
the technocratic counterpart to military "mission creep." The more data
organizations gather from more sources and algorithmically analyze, the more
individuals, managers and executives become accountable for any unpleasant
surprises and/or inefficiencies that emerge.
For example, an Asia-based supply chain manager can
discover that the remarkably inexpensive subassembly he's successfully
procured typically leads to the most complex, time-consuming and expensive
in-field repairs. Of course, engineering design and test should be held
accountable, but more sophisticated data-driven analytics makes the
cost-driven, compliance-oriented supply chain employee culpable, as well.
This helps explain why, when working with
organizations implementing big data initiatives and/or analytics, I've
observed the most serious obstacles tend to have less to do with real
quantitative or technical competence than perceived professional
vulnerability. The more managements learn about what analytics might mean,
the more they fear that the business benefits may be overshadowed by the
risk of weakness, dysfunction and incompetence exposed.
Culture matters enormously. Do better analytics
lead managers to "improve" or "remove" the measurably underperforming? Are
analytics internally marketed and perceived as diagnostics for helping
people and processes perform "better"? Or do they identify the productivity
pathogens that must quickly and cost-effectively be organizationally
excised? What I've observed is that many organizations have invested more
thought into acquiring analytic capabilities than confronting the
accountability crises they may create.
For at least a few organizations, that's led to
"accountability for thee but not for me" investment. Executives use
analytics to impose greater accountability upon their subordinates.
Analytics become a medium and mechanism for centralizing and consolidating
power. Accountability flows up from the bottom; authority flows down from
the top.
Continued in article
Bob Jensen's threads on assessment ---
http://faculty.trinity.edu/rjensen/Assess.htm
Jensen Comment
Another huge problem in big data analytics is that the databases cannot possibly
answer some of the most interesting questions. For example, often they reveal
only correlations without any data regarding causality.
A Recent Essay
"How Non-Scientific Granulation Can Improve Scientific Accountics"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf
By Bob Jensen
This essay takes off from the following quotation:
A recent accountics science study suggests
that audit firm scandal with respect to someone else's audit may be a reason
for changing auditors.
"Audit Quality and Auditor Reputation: Evidence from Japan," by Douglas
J. Skinner and Suraj Srinivasan, The Accounting Review, September
2012, Vol. 87, No. 5, pp. 1737-1765.
Our conclusions are subject to two caveats.
First, we find that clients switched away from ChuoAoyama in large numbers
in Spring 2006, just after Japanese regulators announced the two-month
suspension and PwC formed Aarata. While we interpret these events as being a
clear and undeniable signal of audit-quality problems at ChuoAoyama, we
cannot know for sure what drove these switches (emphasis added).
It is possible that the suspension caused firms to switch auditors for
reasons unrelated to audit quality. Second, our analysis presumes that audit
quality is important to Japanese companies. While we believe this to be the
case, especially over the past two decades as Japanese capital markets have
evolved to be more like their Western counterparts, it is possible that
audit quality is, in general, less important in Japan (emphasis added).
Bob Jensen's threads on controversies in education ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
The New GMAT
The New GMAT: Part 1
"The New GMAT: Questions for a Data-Rich World,: by: Alison Damast, Business
Week, May 14, 2012 ---
http://www.businessweek.com/articles/2012-05-14/the-new-gmat-questions-for-a-data-rich-world
Editor’s Note: This is the first in a three-part series on the
new GMAT, which makes its official debut on June 5. In this article, we
examine the conceptual building blocks for the test’s new Integrated
Reasoning section.
On a blustery day in February 2009, a group of nine deans and faculty
members from U.S. and European business schools huddled together in a
conference room in McLean, Va., at the Graduate Management Admission
Council’s headquarters. They were there to discuss what would be some of the
most radical changes to the Graduate Management Admission Test (GMAT) in the
exam’s nearly 60-year history.
Luis Palencia, then an associate dean at Spain’s
IESE Business School, was eager to press his case
for the skills he thought today’s MBAs needed to have at their fingertips.
Business students must be able to nimbly interpret and play with data in
graphs, spreadsheets, and charts, using the information to draw swift but
informed conclusions, he told his colleagues.
“The GMAT was not becoming obsolete, but it was
failing to identify the skills which might be important to warrant the
success of our future candidates,” he said in a phone interview from
Barcelona three years later.
By the time the faculty advisory group's meeting concluded
two days later, they had come up with a set of recommendations that would
serve as a framework for what would eventually become the new “Integrated
Reasoning” section of the
Next
Generation GMAT, which has been in beta testing
for two years and will be administered to applicants for the first time on
June 5.
Until now, the B-school entrance exam, which was
administered 258,192 times worldwide in 2011, was made up of verbal,
quantitative, and two writing sections. The new section, which replaces one
of the writing sections, is
the biggest change to the GMAT since the shift to
computer-adaptive testing 15 years ago, and one that has been in the works
since 2006, when GMAC first decided to revisit the exam and the skills it
was testing, says Dave Wilson, president and chief executive officer of
GMAC.
“At that time, we got a pretty good handle that the
GMAT was working, but we wanted to know if there was anything that we
weren’t measuring that would provide real value to the schools,” Wilson
says.
It turned out there was a whole slew of new skills
business school faculty believed could be added to the exam. The
recommendations put forth by Palencia and the rest of the committee that
convened in 2009 served as the conceptual building blocks for what a new
section might look like. Later that year, GMAC surveyed nearly 740 faculty
members around the world, from business professors to admissions officers,
who agreed with many of the committee’s findings and suggested that students
needed certain proficiencies to succeed in today’s technologically advanced,
data-driven workplaces.
For example, they gave “high importance” ratings to
skills such as synthesizing data, evaluating data from different sources,
and organizing and manipulating it to solve multiple, interrelated problems,
according to the Next Generation GMAC Skills Survey report.
Those are all examples of skills that can now be
found on the 30-minute Integrated Reasoning section, which GMAC has spent
$12 million developing over the past few years, Wilson says. It will have 12
questions and include pie charts, graphs, diagrams, and data tables. The
section employs four different types of questions that will allow students
to flex their analytical muscles.
Continued in article
"The New GMAT: Thanks, But No Thanks," Business Week, May 31,
2012 ---
http://www.businessweek.com/articles/2012-05-31/the-new-gmat-thanks-but-no-thanks
The future can be scary, especially if you’re
headed to B-school. And if you haven’t taken the GMAT yet, the future can be
downright terrifying. On June 2 the old GMAT will be consigned to the
dustbin of history and replaced on June 5 (after a two-day blackout period)
with a
new version of the B-school entrance test. The new
and improved exam replaces one of the existing writing sections with a new
integrated reasoning section that apparently is giving test takers the night
sweats.
There’s been a mad rush on the part of students to
register for the test before June 5. The Graduate Management Admission
Council, which publishes the exam, isn’t saying exactly how mad, but if you
charted test registrations it would look a lot like a bell curve. “We
expected volumes to go up in April and May, and they have,” wrote GMAC
spokesman Bob Ludwig in an e-mail. “Quite significantly.”
What that means for test takers is that, according
to test-prep companies, registering for the GMAT just got a lot more
difficult, especially if you’ve waited until the last minute. To take the
test before the big changeover, some students are driving an hour or two out
of their way to less popular testing centers and taking the test mid-week
rather than on the weekend.
Andrew Mitchell, director of pre-business programs
at Kaplan Test Prep, says a surge in test registrations before substantive
changes is not unusual. In a recent survey, 38 percent of Kaplan GMAT
students said they were trying to beat the June 2 deadline and take the old
test. Many of them hadn’t even seen the new integrated reasoning questions
yet—they were worried about the new section, sight unseen.
Test takers have now had several months to eyeball
the new section using
sample questions supplied by GMAC and test-prep
materials. Mitchell says students equate the new integrated reasoning
section’s level of difficulty with that of the GMAT’s data sufficiency
questions—some of the test’s toughest—which ask test takers to determine if
the information supplied is enough to answer the question.
“A business school student is generally going to
want to take the easier path if there’s no disadvantage to doing so,”
Mitchell says. “Integrated reasoning is all about working with data. Quant
data is displayed graphically, and that’s intimidating to a lot of
people. It makes sense that people would be apprehensive.”
But it’s not like prospective MBAs were without
options. It’s worth noting that the usual prescription for apprehension when
it comes to the GMAT—hitting the books—was and is available for anyone
contemplating the new test. Kaplan test-prep books that went on sale in
January have material related to integrated reasoning, and integrated
reasoning sections have been added to five of Kaplan’s nine full-length
practice tests.
At Veritas Prep, the number of website visitors
using “integrated reasoning” as a search term has doubled every month since
January. “We’re definitely seeing a lot of traffic,” says Brian Galvin,
director of academic programs at Veritas. “It’s an exponential increase in
interest.”
Continued in article
Head Start Programs
It is now 45 years later. We spend more than $7
billion providing Head Start to nearly 1 million children each year. And finally
there is indisputable evidence about the program's effectiveness, provided by
the Department of Health and Human Services: Head Start simply does not work.
"Time to Ax Public Programs That Don't Yield Results," Liberal
Columnist Joe Klein, Time Magazine, July 26, 2011, Page 27 ---
http://www.time.com/time/nation/article/0,8599,2081778,00.html
Barack Obama has been accused of "class warfare"
because he favors closing several tax loopholes — socialism for the wealthy
— as part of the deficit-cutting process. This is a curious charge: class
warfare seems to be a one-way street in American politics. Over the past 30
years, the superwealthy have waged far more effective warfare against the
poor and the middle class, via their tools in Congress, than the other way
around. How else can one explain the fact that the oil companies, despite
elephantine profits, are still subsidized by the federal government? How
else can one explain the fact that hedge-fund managers pay lower tax rates
than their file clerks? Or that farm subsidies originally meant for family
farmers go to huge corporations that hardly need the help?
Actually, there is an additional explanation.
Conservatives, like liberals, routinely take advantage of a structural flaw
in the modern welfare state: there is no creative destruction when it comes
to government programs. Both "liberal" and "conservative" subsidies linger
in perpetuity, sometimes metastasizing into embarrassing giveaways. Even the
best-intentioned programs are allowed to languish in waste and incompetence.
Take, for example, the famed early-education program called Head Start.
The idea is, as Newt Gingrich might say, simple
liberal social engineering. You take the million or so poorest 3- and
4-year-old children and give them a leg up on socialization and education by
providing preschool for them; if it works, it saves money in the long run by
producing fewer criminals and welfare recipients — and more productive
citizens. Indeed, Head Start did work well in several pilot programs
carefully run by professionals in the 1960s. And so it was "taken to scale,"
as the wonks say, as part of Lyndon Johnson's War on Poverty.
It is now 45 years later. We spend more than $7
billion providing Head Start to nearly 1 million children each year. And
finally there is indisputable evidence about the program's effectiveness,
provided by the Department of Health and Human Services: Head Start simply
does not work.
According to the Head Start Impact Study, which was
quite comprehensive, the positive effects of the program were minimal and
vanished by the end of first grade. Head Start graduates performed about the
same as students of similar income and social status who were not part of
the program. These results were so shocking that the HHS team sat on them
for several years, according to Russ Whitehurst of the Brookings
Institution, who said, "I guess they were trying to rerun the data to see if
they could come up with anything positive. They couldn't."
The Head Start situation is a classic among
government-run social programs. Why do so many succeed as pilots and fail
when taken to scale? In this case, the answer is not particularly difficult
to unravel. It begins with a question: Why is Head Start an HHS program and
not run by the Department of Education? The answer: Because it is a last
vestige of Johnson's War on Poverty, which was run out of the old Department
of Health, Education and Welfare. The War on Poverty attempted to rebuild
poor communities from the bottom up, using local agencies called community
action programs. These outfits soon proved slovenly; often they were little
more than patronage troughs for local Democratic Party honchos — and,
remarkably, to this day, they remain the primary dispensers of Head Start
funds. As such, they are far more adept at dispensing make-work jobs than
mastering the subtle nuances of early education. "The argument that Head
Start opponents make is that it is a jobs program," a senior Obama
Administration official told me, "and sadly, there is something to that."
Continued in article
Assessment in Math and Science:
What's the Point? ---
http://www.learner.org/resources/series93.html
Assessment by Ranking May Be a Bad Idea
An interesting article on forced performance
rankings (which might be read as grading):
"Olympics 1, AIG 0: Why Forced Ranking Is a Bad Idea" ---
Click Here
http://blogs.hbr.org/bregman/2010/02/olympics-1-aig-0-why-forced-ra.html?cm_mmc=npv-_-DAILY_ALERT-_-AWEBER-_-DATE
Jensen Comment
I think some readers fail to see the importance of just what the title means
when it reads “Olympics 1, AIG 0."
They're apt to look for some relationship between the Olympics and AIG. There
may well be some very obscure relationship, but that’s not the point.
February 19, 2010 reply from David
Albrecht [albrecht@PROFALBRECHT.COM]
Bob,
This is one of the most interesting stories you've
passed along in quite a while. I especially like the part of the article
that says once a ranking criterion is selected, all other tasks an employee
might perform (such as learning/training) are counterproductive. I think
this is a situation very present in academe. GPA becomes an important metric
in students' quest for either employment or graduate school after graduation
with a BSBA. If GPA is the primary criterion for awarding entry and
scholarships, then any activity a student takes that could result in a lower
grade is to be avoided at all costs.
Moreover, learning within a course is a
multivariate activity. I can think of memorization, application, affectation
and personal growth. If a professor is untrained in education (and most biz
profs are), professor selection of inappropriate grading criteria can place
a huge cost on students.
David Albrecht
February 19, 2010 reply from James R.
Martin/University of South Florida
[jmartin@MAAW.INFO] (I combined two replies)
According to Deming:
Annual reviews and ranking employees indicates the absence of a knowledge of
variation and an absence of an understanding of the system. A manager who
understands variation would not rank people because he or she would
understand that ranking people merely ranks the effect of the system on the
people. This causes tampering & destroys motivation and teamwork.
See
http://maaw.info/DemingMain.htm for Deming's theory of management.
This hit one of my buttons.
The point: There is nothing wrong with ranking people in games. Someone
wins and someone loses. But life, business, and education are not games.
Everyone can win if they cooperate and work together. Ranking people
prevents them from doing that and creates winners and losers in the short
run. In the long run, everyone loses.
February 20, 2010 reply from Francine McKenna
[retheauditors@GMAIL.COM]
Bob/Dave
Agree wholeheartedly. I've written a lot about forced
ranking for partners on down and the negative effect it's had on
professionalism and morale in the Big 4. They've followed their big ideal
client GE into the abyss.
http://retheauditors.com/2009/11/05/live-our-values-demonstrate-our-behaviors-support-our-strategy/
http://retheauditors.com/2009/08/12/goingconcern-ratings-raises-and-promotions-forced-ranking-in-the-big-4/
http://retheauditors.com/2007/06/26/when-is-a-layoff-not-a-layoff/
Francine
February 19, 2010 reply from Bob Jensen
And I forgot to cringe properly when remembering all the times I thought
I was making the job easier when I had students rank each other’s term
papers --- because I thought ordinal-scale ranking would be easier for them
than assigning a letter grade or ratio-scaled score. Ratio scales differ
from interval scales by having a common zero point, which is what makes
correlations different from covariances.
In small graduate classes I thought it would be a learning exercise for
students to both read each others’ papers and rank them. Students were asked
not to rank their own papers in the set of submitted rankings.
However, for grading purposes I graded the papers before I read the
student rankings. I reserved the right to only mark a paper’s grade upward
after reading the student commentaries that accompanied their rankings. I
suspect I would’ve graded downward as well if plagiarism was detected by
student rankers, but not once in my career did a student ranker ever
disclose a case of plagiarism.
Still, I’m now wondering about the propriety of making students rank
papers.
Bob Jensen
“If a student doesn’t come to school,” he continued,
“how can you justify passing that kid?”
Fernanda Santos
"Bronx School’s Top Ranking Stirs Wider Doubts About Rating System,"
by Fernanda Santos, The New York Times, January 20, 2011 ---
http://www.nytimes.com/2011/01/21/education/21grades.html?_r=1&hpw
One of the trademarks of New York City’s school
accountability system is an equation that assigns every school a letter
grade, A through F, based on a numerical score from 1 to 100.
Lynn Passarella, facing camera, the principal of
the Theater Arts Production Company School, outside the school on Thursday.
She declined to comment on the allegations about her school’s grading
practices.
A parent pulling up the latest report card for the
Theater Arts Production Company School in the Bronx would find that it
earned the score of 106.3 (including extra credit).
But that very empirical-sounding number, which was
the highest of any high school in the city, is based in part on subjective
measures like “academic expectations” and “engagement,” as measured by
voluntary parent, teacher and student surveys.
And, according to some teachers at the school, even
the more tangible factors in the score — graduation rates and credits earned
by students — were not to be taken at face value. The school has a policy
that no student who showed up for class should fail, and even some who
missed many days of school were still allowed to pass and graduate.
The Department of Education, which revealed on
Wednesday that it was investigating grading practices at the school, says
that it has a team devoted to analyzing school statistics every year and
looking for red flags like abnormal increases in student scores or dropout
rates. But a department official said that nothing in its data had raised
suspicions about the school, known as Tapco, until a whistle-blower filed a
complaint in October.
Still, in a data-driven system where letter grades
can determine a school’s fate, one big question looms over the
investigation: If the allegations turn out to be true, are they an exception
or a sign of a major fault in the school accountability system?
“The D.O.E. has absolutely created a climate for
these types of scandals to happen,” Michael Mulgrew, the president of the
teachers’ union, said in an interview. “Their culture of ‘measure everything
and question nothing a principal tells you’ makes it hard to figure out
what’s real and what’s not real inside a school.”
There are many gradations of impropriety, and it is
unclear if any of them apply to Tapco, which has about 500 students and also
includes a middle school. The school’s teacher handbook states that no
student should fail a class if he or she regularly attends, and that
students who miss work should be given “multiple opportunities for student
success and work revision.”
Current and former teachers at the school said that
even students who were regularly absent were given passing grades, in some
cases with course credits granted by the principal without a teacher’s
knowledge. Some students’ records showed credits for courses the school did
not offer.
The investigation into the irregularities at Tapco,
which began in October, also includes allegations that the school’s
principal, Lynn Passarella, manipulated teacher and parent surveys, which
represent 10 of the 100 points in a school’s score. Graduation rates,
passing rates on Regents exams and earned credits constitute most of the
score.
Ms. Passarella declined to comment on the
allegations.
A spokesman for the Education Department, Matthew
Mittenthal, said: “We take every allegation of misconduct seriously, and
hope that the public can reserve judgment until the investigation is
complete.”
Sometimes, the analysts who pore over the data
uncover serious problems. Last year, the Education Department lowered the
overall scores of three high schools. At Jamaica High School in Queens, the
department discovered that the school had improperly granted credit to some
transfer students. At John F. Kennedy High School in the Bronx and W. H.
Maxwell Career and Technical Education High School in Brooklyn,
administrators could not provide documentation to explain why some students
had left the schools.
Since 2008, at least four principals and assistant
principals have been reprimanded — two retired, one served a 30-day unpaid
suspension and another paid a $6,500 fine — on charges that included
tampering with tests.
Principals can get as much as $25,000 in bonuses if
their schools meet or exceed performance targets, and some experts are
skeptical that the department’s system of checks and balances is as
trustworthy as it should be, particularly when money is at stake.
Tapco’s administrators got a bonus once, for the
2008-9 school year, when the high school’s overall score was 85.8, which
earned it an A. (The middle school scored 73.) Ms. Passarella received
$7,000, while her assistant principals got $3,500 each, according to the
Education Department. (Administrator bonuses for 2009-10 performance have
not been doled out.)
“There’s an inherent temptation towards corruption
when you create a situation where there are rewards for things like higher
test scores or favorable surveys,” said Sol Stern, an education researcher
at the Manhattan Institute, a conservative research group. “It’s an
invitation to cheating.”
One mother, Cathy Joyner, whose daughter, Sapphire
Connor, is a junior, said the school was excellent, adding that “the
children are respectful” and that the school was “concentrating on their
talents.”
But one teacher, who spoke on condition of
anonymity because he said he feared for his job, gave a different account.
For teachers who do not do what the principal wants, the teacher said, “it’s
difficult to get tenure.”
“If a student doesn’t come to school,” he
continued, “how can you justify passing that kid?”
Wow: 97% of Elementary NYC Public Students Get A or B Grades --- There
must be higher IQ in the water!
"City Schools May Get Fewer A’s," by Jennifer Medina, The New York Times,
January 28, 2010 ---
http://www.nytimes.com/2010/01/30/education/30grades.html?hpw
Michael Mulgrew, the president of the United
Federation of Teachers, criticized the decision to reduce the number of
schools that receive top grades.
Continued in article
Bob Jensen's threads on assessment are at
http://faculty.trinity.edu/rjensen/assess.htm
"Colleges (i.e., prestigious colleges) With Lenient Grades Get Six
Times as Many Grads Into B-School (and jobs)," by Louis Lavelle,
Bloomberg Businessweek, July 30, 2013 ---
http://www.businessweek.com/articles/2013-07-30/colleges-with-lenient-grades-get-six-times-as-many-grads-into-b-school
Link to the Study ---
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0069258
Abstract
When explaining others' behaviors, achievements, and failures, it is common
for people to attribute too much influence to disposition and too little
influence to structural and situational factors. We examine whether this
tendency leads even experienced professionals to make systematic mistakes in
their selection decisions, favoring alumni from academic institutions with
high grade distributions and employees from forgiving business environments.
We find that candidates benefiting from favorable situations are more likely
to be admitted and promoted than their equivalently skilled peers. The
results suggest that decision-makers take high nominal performance as
evidence of high ability and do not discount it by the ease with which it
was achieved. These results clarify our understanding of the correspondence
bias using evidence from both archival studies and experiments with
experienced professionals. We discuss implications for both admissions and
personnel selection practices.
. . .
General Discussion
Many studies in the social psychology and
organizational behavior literatures have found that people tend to attribute
too much influence to disposition and too little influence to situational
factors impinging on the actor when explaining others' behaviors,
achievements, and failures. This common tendency, labeled the correspondence
bias or the fundamental attribution error, has been shown to be robust
across a variety of contexts and situations. Yet, to date, most of the
evidence about this bias comes from laboratory experiments with college
students as participants, and its implications for field settings and
organizational outcomes are seldom examined. Using data from both the
experimental laboratory and the field, we extend prior research by
investigating whether this tendency leads experienced professionals to make
systematic mistakes in their selection decisions, favoring alumni from
academic institutions with higher grade distributions and employees working
in favorable business climates. Our results indicate that candidates who
have demonstrated high performance thanks to favorable situations are more
likely to be rated highly and selected. Across all our studies, the results
suggest that experts take high performance as evidence of high ability and
do not sufficiently discount it by the ease with which that performance was
achieved. High grades are easier to achieve in an environment where the
average is high and so are less indicative of high performance than are the
same grades that were earned from an institution with lower grades on
average. Sky-high on-time percentages should be less impressive at an
airport that was running well before the manager got there. Although we
focused on two selection scenarios, we believe the results speak to other
selection and evaluation problems.
Indeed, we see consistent evidence of situation
neglect in contexts where political and business leaders are credited with
performance that derives directly from stochastic economic factors. Voters
face a Lewinian dilemma when they evaluate the performance of incumbent
politicians running for re-election. They should reward politicians who
create positive change for their constituencies while considering what
portion of those changes were due to lucky or exogenous factors. Wolfers
[41] finds that
voters, like our admissions professionals and executives, favor politicians
that had the good luck to work under favorable conditions. Voters are more
likely to reelect incumbents after terms marked by positive national
economic trends or (in the case of oil-rich states) high oil prices. CEOs
also benefit from fortuitous economic conditions for which they are not
responsible. Bertrand and Mullainathan
[42] present evidence
that CEO compensation is driven to equal degrees by their management and the
uncontrollable economic conditions in which they managed. Stakeholders in
these cases have strong incentives to reward leaders who add value above the
vagaries of the economy, but they seem blind to the difference.
It is often the case that structural and
situational factors are the most powerful influences on behavior. Within
organizations, for example, it is easier to succeed in some jobs than in
others
[43]. Sometimes
people will achieve positive outcomes simply because of a beneficent
environment. It is easier to achieve success as a manager when your team is
strong than when your team is weak. Likewise, it is easier to obtain a
strong education in an excellent private school than in an under-funded
public school. And it is easier to achieve high grades at schools where
higher grades are the norm. So it would be a mistake to neglect situational
effects on performance, but that is what our data suggest that even experts
and professionals tend to do.
Are we always doomed to make erroneous
correspondent inferences? Evidence suggests not; the bias is subject to a
number of moderating factors. These are useful to consider both because they
provide clues about the psychological mechanisms at work and because they
suggest potential debiasing treatments. For instance, when people are
stressed, distracted, or busy, they are more likely to fall victim to the
correspondence bias
[44]. Those with
greater capacity for reflective thought, as measured by need for cognition,
are less likely to show the bias
[45]. When people
feel accountable to others, they are less likely to show the bias
[46]. When people are
in good moods, they appear more likely to show the bias
[47]. And some
collectivistic cultures may be less vulnerable to the correspondence bias
than individualistic ones
[48],
[49].
Organizations often adopt practices because they
are legitimate, popular, or easy to justify
[50],
[51]. That may help
explain why we observed such consistency in admissions policies in
neglecting to consider differences in grade distributions between
institutions. This sort of consistency in organizational “best” practices
can create incentives for individuals to play along, despite their
imperfections. Indeed, it is even conceivable that cultural or linguistic
norms can make it easier for individuals to follow decision norms that are
more easily understood by or explained to others. On the other hand, it is
reasonable to assume that finding a better system to evaluate applicants
would improve admissions decisions, allowing the schools that do it to
identify strong candidates that other schools neglect. The Oakland Athletics
baseball team did just this when it pioneered a new statistical approach to
identifying promising baseball players to recruit
[52]. Their success
has since been emulated by other teams, changing the way baseball's talent
scouts pick players. However, the problem for admissions departments may be
more complicated because explicitly tarring some institutions as
lenient-grading is likely to elicit energetic protests if they ever find out
about it
[53].
It is common in organizations for the abilities of
an individual, a department, or a division to be shrouded in complicating or
confounding influences that make them difficult to detect or measure
[54]. Indeed, as much
as ratings systems like grades and performance metrics like on-time
percentages can help clarify standards for evaluation, they can also be used
to obscure performance
[55]. Variation in
grading standards between institutions obscures the value of using grades to
measure student performance. It is probably in the interest of
lenient-grading institutions to hide the degree of their leniency.
Consistent with this motive, recent years have seen changes in the
disclosure that institutions are willing to make
[56]. Fewer academic
institutions are willing to disclose average grading data or class rankings
for their students or alumni. When we contacted institutions to inquire
about average grades, elite, expensive, private institutions – those with
the highest average grades – were most likely to decline to disclose the
information.
Organizational Image, Legitimacy, and Stakeholder
Appraisals
The strategic use of scoring and assessment metrics
has implications at the organization level because of the way that
institutions compete. Scott and Lane
[57] advanced a
theory of organizational image in which stakeholders (both members as well
as outside audiences) play a key role in shaping the organization's image by
making legitimacy appraisals that can counterbalance the organization's
attempts at image management. This model is built on the dual premises that
organizations and their members derive personal and economic benefits from
promoting a positive image
[58],
[59], but that
salient audiences have a role in validating that image
[60],
[61]. These forces
form an equilibrium that balances the organization's incentives for an
unbounded positive spin with the utility gained by stakeholders from an
image grounded in reality. Scott and Lane
[57] term the
specific mechanism by which this equilibrium is reached reflected
stakeholder appraisals. In the present paper we have investigated a
setting in which stakeholders may have difficulty judging the
appropriateness of image-relevant information which could then threaten the
stability of the reflected stakeholder appraisal equilibrium.
In the context of higher education, graduating
students are among the primary interfaces through which employers, graduate
schools, and communities interact with undergraduate institutions. Their
reputation in the form of grades contributes to the reputation
[62] of the
organization. As such, undergraduate institutions have an incentive to
promote an image of intelligence and achievement to these outside audiences
by maintaining a relatively high grade distribution. Given the tremendous
value of being able to place alumni in better graduate schools and in better
jobs, universities cannot be expected to go too far in seeking to curtail
grade inflation. For example, universities are unlikely to implement
meaningful institutional changes such as replacing grades with percentile
rankings. Instead, we should expect academic institutions to pay lip service
to the importance of high academic standards while at the same time avoiding
publicizing average grade distributions and avoiding reporting class rank
data on their students.
Do we see unchecked escalation of grade
distributions by a market full of organizations unconstrained by the
critical feedback from shareholders? Of course, there are multiple
mechanisms supporting a moderate equilibrium even without functioning
shareholder criticism of the type we have described, but some data suggest
grade inflation is a prolonged and significant trend in U.S. Education
[6]. More troubling
are anecdotal reports of institutions manipulating their grade distribution
with the publicly expressed intent of influencing the selection decisions of
hiring firms
[63]. Clearly, these
institutions are anticipating that employers will not sufficiently discount
the grades of their alumni to eliminate the advantage their inflated grades
will confer.
Limitations and Directions for Future Research
Our studies are subject to several important
limitations. First, the sample used in our first study was relatively small
due to the size of the admissions department that participated, even though
the results were highly significant. In addition, the first and second
studies employed hypothetical decisions, which may have limited validity as
a model of fully consequential and incentivized decision making. Future
research could benefit from a more qualitative research approach to
investigate how admissions and promotion decisions are made by various
organizations. As for Study 3, there are many variables (such as variations
in average GPA by discipline within a school) for which we lacked
information and thus could not control in our analyses. These variables may
have important influences on admission decisions that are not captured in
the present research. Although these are important limitations, it is also
worth noting that the limitations differ across studies and yet the findings
are robust.
The conclusions implied by our results as well as
the limitations of our research bring forth some fruitful and interesting
possible avenues for future research. One interesting question is whether
other academic selection contexts would show the same patterns as business
school admissions decisions. Law schools, for instance, use the Law School
Admissions Council, an organization that (among other things) processes
applications for law schools and provides a service that gives schools a
sense of where a given applicant's GPA falls relative to other applicants
that the LSAC has seen from that same institution. The Graduate Management
Admissions Council does not process business school applications and so does
not provide an equivalent service for business schools. Does the LSAC's
assistance help law schools make better admissions decisions?
Similarly, future research could explore the
implications of the correspondence bias for promotions of business
professionals. Just as educational institutions vary with respect to the
ease of achieving high grades, so do companies, industries, and time periods
differ with respect to the ease of achieving profitability. There are some
industries (such as airlines) that are perennially plagued by losses and
whose firms have trouble maintaining profitability. There are other
industries (such as pharmaceuticals) that have seen more stable
profitability over time. And clearly there are changes over time in industry
conditions that drive profitability; for example, global oil prices drive
profitability among oil companies.
We believe an important avenue for further
investigation lies in continuing the study of the correspondence bias in
empirical settings with organizationally-relevant outcomes. A more thorough
understanding of the implications of this common bias for organizations
could be achieved by further investigating business decisions such as
promotions. There are also a multitude of other business decisions in which
a latent variable of interest is seen in the context of varying situational
pressures. Investment returns, sports achievements, and political success
are all domains in which judgments are vulnerable to the tendency to
insufficiently discount the influence of the situation. We expect that the
correspondence bias affects outcomes in these domains.
Our theory holds that a firm's good fortune (in the
form of greater profits) will be mistaken as evidence for the abilities of
its managers. If this is so, then we should more often see employees of
lucky firms being promoted than of unlucky firms
[64]. We would
expect, for instance, that pharmaceutical executives are more likely to be
hired away to head other firms than are airline executives. However, this
finding might be vulnerable to the critique that pharmaceutical executives
actually are more capable than are airline executives–after all, their firms
are more consistently profitable. Therefore, a better way to test this
prediction would be using an industry (such as oil) in which fortunes
fluctuate over time due to circumstances outside the control of any firm's
managers. Our prediction, then, would be that oil executives are more likely
to be hired away to head other firms when the oil industry is lucky (i.e.,
oil prices are high) than when the industry is unlucky (i.e., oil prices are
low).
Theoretical Contributions
Our results contribute to the literature on the
psychological process at work in comparative judgment, a literature that
stretches across psychology
[65], economics
[66], and
organizational behavior
[67]. In this paper,
we extend previous research by examining judgmental contexts in which expert
decision-makers are comparing outcomes that vary with respect to both
nominal performances and their ease. We should also point out that these
results are, in a number of ways, more dramatic than the results of previous
research showing biases in comparative judgment. Previous results have been
strongest when participants themselves are the focus of judgment
[65],
[68]. Biases in
comparative judgment shrink when people are comparing others, and shrink
still further when they have excellent information about performance by
those they are comparing
[69]. Biases
disappear when comparisons are made on a forced ranking scale
[70]. In this paper,
we have shown comparative judgments to be powerfully biased even when people
are evaluating others about whom they have complete information (as modeled
in Study 1), and even when the assessments (e.g., admission decisions) are
made on a forced distribution that prevents them from rating everyone as
better than everyone else.
Continued in article
Chronicle of Higher Education: Students Cheat. How Much Does It Matter?
Click Here
. . .
Trust your students, the pedagogical
progressives advise, and they’ll usually live up to it. But that has not
been Ajay Shenoy’s experience. In March, Shenoy, an assistant professor of
economics at the University of California at Santa Cruz, relaxed the
expectations for his winter-quarter final, making it open note and giving
students more time.
That hadn’t been Shenoy’s first impulse.
Initially, he thought he might make it harder to cheat by letting students
view just one question at a time, and randomizing the order of questions.
The test would be timed, and everyone would take it at once.
Then his students started to go home, and home
was all over the world. Between time zones and air travel, there was no way
he could expect them to all find the same two hours for an exam. Besides, he
realized, his students were, understandably, incredibly stressed.
Still, Shenoy required students to do their own
work. He even asked them to let him know if they heard about anyone
cheating.
After the exam, a couple of students came
forward. One had heard about classmates putting test questions on Chegg.
Another was pretty sure his housemates had cheated off their fraternity
brothers. Alarmed, Shenoy decided to investigate. In his research, Shenoy
uses natural-language processing to detect signs of political corruption. So
to understand the scope of the cheating, he wrote a simple computer program
to compare students’ exam responses. He uncovered an amount of cheating he
calls “stunning.”
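Shenoy's program has not been published; a minimal sketch of the general technique, pairwise similarity scoring of free-text exam answers, might look like the following in Python. The function name, similarity threshold, and sample answers are hypothetical, not his actual code.

```python
# Hedged sketch (NOT Professor Shenoy's actual program): compare every
# pair of students' answers and flag pairs that are suspiciously similar.
from difflib import SequenceMatcher
from itertools import combinations

def flag_similar_answers(answers, threshold=0.9):
    """answers: {student: answer_text}. Returns (student, student, ratio)
    tuples for pairs whose text similarity is at or above `threshold`."""
    flagged = []
    for (s1, a1), (s2, a2) in combinations(answers.items(), 2):
        ratio = SequenceMatcher(None, a1, a2).ratio()  # 0.0 .. 1.0
        if ratio >= threshold:
            flagged.append((s1, s2, round(ratio, 2)))
    return flagged

# Made-up exam responses: two verbatim copies and one independent answer.
exam = {
    "alice": "Marginal cost equals the derivative of total cost with respect to quantity.",
    "bob":   "Marginal cost equals the derivative of total cost with respect to quantity.",
    "carol": "MC is the slope of the cost curve; it rises as output grows.",
}
print(flag_similar_answers(exam))  # flags the alice/bob pair
```

High similarity between short, formulaic answers is expected, so in practice a screen like this only identifies pairs worth a human look, especially on longer free-response questions.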
It also bothered Shenoy that it seemed to be
common knowledge among his students that a number of their classmates were
cheating.
“This is the issue when people say you should
just trust students more,” Shenoy says. “Even if 99 percent of the students
don’t want to cheat, if that 1 percent is cheating — and if everyone else
knows about it — it’s a prisoner’s dilemma, right?” Students who are honest
know they are at a disadvantage, he says, if they don’t think the professor
is going to enforce the rules.
So Shenoy enforced the rules. He investigated
20 cases in his class of 312, and filed academic-misconduct reports for 18.
(Those weren’t the only students who cheated, Shenoy says. Through
documentation he got from Chegg, he knows many more students turned to the
site. But he had time to pursue only students who had submitted questions to
it.)
In-person exam cheating, Shenoy thought, is
ineffective, and probably doesn’t boost students’ grades all that much —
certainly no more than, well, studying more.
But
when he compared the grades of students who had cheated with those of their
classmates who didn’t, he found that the cheaters scored about 10 points
higher on the exam. “I guess it’s possible that the smarter students
were also the ones who chose to cheat,” Shenoy says. “But usually, in my
experience, it’s the other way around.”
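The ten-point gap Shenoy reports is a simple difference of group means. With made-up scores (not his data), the computation looks like this:

```python
# Illustrative only: compare mean exam scores of flagged cheaters
# against the rest of the class. Names and scores are invented.
from statistics import mean

def score_gap(scores, cheaters):
    """scores: {student: exam_score}; cheaters: set of flagged students.
    Returns cheater mean minus non-cheater mean."""
    cheat = [v for s, v in scores.items() if s in cheaters]
    honest = [v for s, v in scores.items() if s not in cheaters]
    return mean(cheat) - mean(honest)

scores = {"a": 88, "b": 91, "c": 74, "d": 78, "e": 80, "f": 76}
print(score_gap(scores, {"a", "b"}))  # prints 12.5
```

As Shenoy notes, a raw gap like this cannot by itself separate "cheating raised scores" from "stronger students chose to cheat"; it only quantifies the association.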
Who’s hurt when students cheat? It’s their
loss, some professors will argue. It’s the cheaters who’ve squandered their
tuition payment, time, and opportunity to learn the material. Besides, their
actions will probably catch up to them eventually. That’s not how Shenoy
views it, though.
If cheating leads to a higher grade, says the
economist, then cheating is rational. “This was actually quite valuable to
the student,” Shenoy says. “At the expense of the other students.”
So Shenoy felt a responsibility. “Part of my reason
for putting so much time into pursuing this,” he says, “was just out of a
sense of justice for the other students.”
Continued in article
Jensen Comment
I continue to repeat my example of the 60+
students who were expelled for cheating in a political science class where every
student was assured of an A grade in the course if they did the homework. Many
reported that they cheated (in this case plagiarized) because, when an A grade
was assured irrespective of effort, their time was better spent on courses where
an A was not assured.
When students took my courses on a pass-fail basis,
their performance on homework, term papers, and exams was seldom as good as that
of most students taking the course for a letter grade. The pass-fail students
seemingly did not put in the time and effort of students who worked overtime for
an A or B grade.
Chronicle of Higher
Education: Seven Ways to Assess Students Online and Minimize Cheating
---
Click Here
Break up a big high-stakes exam into small weekly tests.
Start and end each test with an honor statement.
Ask students to explain their problem-solving process.
Get to know each student’s writing style in low- or no-stakes tasks.
Assess learning in online discussion forums.
Don’t base grades solely on tests.
Offer students choice in how they demonstrate their knowledge.
Jensen Comment
If you base grades almost entirely upon examinations, make students take those
examinations in some type of testing center or have the exams proctored locally.
Concept Knowledge and Assessment of Deep Understanding
Competency-Based Learning (where teachers don't selectively assign grades) ---
https://en.wikipedia.org/wiki/Competency-based_learning
Critical Thinking ---
https://en.wikipedia.org/wiki/Critical_thinking
Over 400 Examples of Critical Thinking and
Illustrations of How to Mislead With Statistics ---
http://faculty.trinity.edu/rjensen/MisleadWithStatistics.htm
Western Governors University (with an entire
history of competency-based learning) ----
http://www.wgu.edu/
From a Chronicle of Higher
Education Newsletter on November 3, 2016
Over the past 20 years, Western Governors University has grown into a
formidable competency-based online education provider. It’s on just its
second president, Scott D. Pulsipher, a
former Silicon Valley executive, who stopped by our offices yesterday.
WGU has graduated more
than 70,000 students, from all 50 states. But a key part of the
institution’s growth strategy is local, using its affiliations with
participating states (not that all the partnerships start smoothly, mind
you). There are six of them, and more growth is on the way; Mr. Pulsipher
says WGU is in serious discussions to expand into as many as five more
states — he declines to name them — at a pace of one or two per year.
The university's main focus remains students, he says. One example is an
effort to minimize student loans. Through better advising, students are
borrowing, on average, about 20 percent less than they did three years ago,
amounting to savings of about $3,200. “Humans make better decisions,” Mr.
Pulsipher says, “when they have more information.” —Dan
Berrett
Western Governors University is a Leading Competency-Based Learning
University ---
https://en.wikipedia.org/wiki/Western_Governors_University
Here’s How Western Governors U. Aims to Enroll a Million Students ---
https://www.chronicle.com/article/Here-s-How-Western-Governors/243492?cid=at&utm_source=at&utm_medium=en&elqTrackId=0fe6b239932845ee9da44c2fa67cdf5f&elq=885d6ac654144af5aff9430a4640932d&elqaid=19192&elqat=1&elqCampaignId=8710
2016 Bibliography on Competency-Based
Education and Assessment ---
https://www.insidehighered.com/quicktakes/2016/01/26/rise-competency-based-education?utm_source=Inside+Higher+Ed&utm_campaign=0f02e8085b-DNU20160126&utm_medium=email&utm_term=0_1fcbc04421-0f02e8085b-197565045
Bob Jensen's threads on
Competency-Based Education and Assessment
---
See Below
Competency-Based Learning ---
https://en.wikipedia.org/wiki/Competency-based_learning
EDUCAUSE: Competency-Based Education (CBE) ---
https://library.educause.edu/topics/teaching-and-learning/competency-based-education-cbe
Mathematics Assessment Project (learning assessment) --- http://map.mathshell.org
Educause: 2016 Students and Technology Research Study ---
https://library.educause.edu/resources/2016/6/2016-students-and-technology-research-study
This hub provides findings from the 2016 student
study, part of the EDUCAUSE Technology Research in the Academic Community
research series. ECAR collaborated with 183 institutions to collect
responses from 71,641 undergraduate students across 25 countries about their
technology experiences. This study explores technology ownership, use
patterns, and expectations as they relate to the student experience.
Colleges and universities can use the results of this study to better engage
students in the learning process, as well as improve IT services, increase
technology-enabled productivity, prioritize strategic contributions of IT to
higher education, plan for technology shifts that impact students, and
become more technologically competitive among peer institutions.
Bob Jensen's Education Technology Threads ---
http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
Educause: Competency-based Education (CBE)
https://library.educause.edu/topics/teaching-and-learning/competency-based-education-cbe
The competency-based education (CBE) approach allows students to advance
based on their ability to master a skill or competency at their own pace
regardless of environment. This method is tailored to meet different
learning abilities and can lead to more efficient student outcomes. Learn
more from the Next Generation Learning Challenges about CBE models and
grants in K-12 and higher education.
Organizations
· CBEinfo - This site was created for schools to share lessons learned in
developing CBE programs.
· Competency-Based Education Network (CBEN)
· CAEL Jumpstart Program
· CompetencyWorks
Competency Definition
· Competency-Based Learning or Personalized Learning. This U.S. Department
of Education topic page includes links to various states and districts
putting CBL programs into action.
· Principles for Developing Competency-Based Education Programs. Change
Magazine, March/April 2014. Sally M. Johnstone and Louis Soares
· The Degree Qualifications Profile, Lumina
Bob Jensen's competency-based learning threads ---
http://faculty.trinity.edu/rjensen/assess.htm#ConceptKnowledge
Critical Thinking ---
https://en.wikipedia.org/wiki/Critical_thinking
What is Critical Thinking Anyway?
https://chroniclevitae.com/news/1691-what-is-critical-thinking-anyway?cid=wb&utm_source=wb&utm_medium=en&elqTrackId=b1a00d70cdda451babcad48a0b78f4fa&elq=dc026b5ac5f247e4a5cadb81f89631c7&elqaid=12462&elqat=1&elqCampaignId=5069
32 Animated Videos by Wireless Philosophy Teach You the Essentials of
Critical Thinking ---
http://www.openculture.com/2016/07/wireless-philosophy-critical-thinking.html
Authentic Assessment Toolbox (critical thinking assessments) ---
http://jfmueller.faculty.noctrl.edu/toolbox/index.htm
Also see
http://faculty.trinity.edu/rjensen/assess.htm
Carl Sagan’s Syllabus & Final Exam for His Course on Critical Thinking
(Cornell, 1986) ---
http://www.openculture.com/2018/01/carl-sagans-syllabus-final-exam-for-his-course-on-critical-thinking-cornell-1986.html?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+OpenCulture+%28Open+Culture%29
Purdue University's New Competency-Based Undergraduate Degree
"Competency for the Traditional-Age Student," by Paul Fain,
Chronicle of Higher Education, March 30, 2016 ---
https://www.insidehighered.com/news/2016/03/30/purdue-u-gets-competency-based-education-new-bachelors-degree?utm_source=Inside+Higher+Ed&utm_campaign=8b78e204e3-DNU20160330&utm_medium=email&utm_term=0_1fcbc04421-8b78e204e3-197565045
Accreditor approves Purdue's new competency-based
bachelor's degree, which blends technical disciplines with the humanities
and has a customizable approach designed more for a career than a first job.
Competency-based education isn’t for
everyone, say even supporters of the emerging form of higher education.
Many of the 600 or so colleges that are
trying to add competency-based degrees are focused on adult, nontraditional
students who want a leg up in the job market. Some of those academic
programs have been developed in collaboration with specific industry
partners, where an employer’s endorsement of the credential can lead to a
graduate employee getting a promotion.
Other colleges' forays into competency-based
education have been in disciplines with professional licensing and a heavy
dose of task-based learning, which seems like an easier fit with academic
programs based on mastery rather than time in a classroom.
That seems
particularly true for research universities. For
example, the University of Michigan’s first competency-based degree is a
master’s of health professions education. And the University of Texas System
began with a bachelor’s in biomedical science.
The toughest nut to
crack for competency-based education appears to be bachelor’s degrees aimed
at traditional-age students. But that’s what Purdue University is doing with
a
newly approved bachelor’s in transdisciplinary
studies in technology. And the customizable, competency-based degree from
the new Purdue Polytechnic Institute combines technical disciplines with the
humanities.
Purdue’s personalized, interdisciplinary
approach is a promising one, said Charla Long, executive director of the
Competency-Based Education Network, a relatively new group of colleges and
universities.
“Competencies can be developed outside your
discipline,” she said, “and be as relevant to your discipline.”
Purdue also is less overtly focused on job
training -- or at least on graduates’ first jobs -- than some might expect
with a competency-based degree. In fact, the university's approach sounds
like an experimental form of liberal arts education.
“It’s about preparing students for life,”
said Jeff Evans, interim associate dean for undergraduate programs at
Purdue, who adds that graduates of the program “will be ready to adapt to
this fast-changing world.”
The public university
began working on the new competency-based degree
program in 2014. Mitch Daniels, Purdue’s president and Indiana's former
governor, previously created the
Purdue Polytechnic Institute, which has been
tasked with working on transformational forms of undergraduate education.
The institute, which is located at eight branch locations as well as
Purdue's main campus, won a university-sponsored contest with its idea for
the new competency-based degree.
Customization is a big part of the degree’s
novelty.
Incoming students will be able to work
one-on-one with a faculty mentor to create personalized plans of study,
Purdue said, which will blend technology-focused disciplines such as
computing, construction management, engineering, and aviation with social
sciences, the humanities and business.
“We’re trying to connect the passion of the
students with their journey of learning,” said Evans.
Continued in article
Bob Jensen's threads on competency-based testing and degrees ---
http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge
In Norway, she said, "universities exchange papers for
grading." Objectivity is compromised by mere humanity. Educators who engage
personally with their students are psychologically vulnerable to bias in
grading.
Kathleen Tarr ---
http://chronicle.com/article/A-Little-More-Every-Day-/233303/?cid=at&utm_source=at&utm_medium=en
USA Department of
Education: Guidance on Competency-Based Education
Inside Higher Ed, September 23, 2015 ---
https://www.insidehighered.com/quicktakes/2015/09/23/guidance-competency-based-education?utm_source=Inside+Higher+Ed&utm_campaign=3d26811214-DNU20150923&utm_medium=email&utm_term=0_1fcbc04421-3d26811214-197565045
The U.S. Department of Education said Tuesday it is poised to release an extensive reference guide for institutions that are participating in an experiment on competency-based education. Since that project was begun last year, the department said it became clear that more guidance was needed -- for both colleges and accrediting agencies. The department has yet to release the document publicly, but plans to post it at this link --- https://experimentalsites.ed.gov/exp/guidance.html
“We believe that this guide will offer tremendous support for both experienced and new competency-based education providers as they implement this experiment,” Ted Mitchell, the under secretary of education, said in a written statement. “We recognize that many of you were anticipating that the guide would be released earlier this summer, but it was very important for us to have a high level of confidence that the guidance it contains is on very firm ground.”
Competency-Based Learning ---
https://en.wikipedia.org/wiki/Competency-based_learning
"Measuring Competency," by Paul Fain, Inside Higher Ed,
November 25, 2015 ---
https://www.insidehighered.com/news/2015/11/25/early-glimpse-student-achievement-college-america-competency-based-degree-provider?utm_source=Inside+Higher+Ed&utm_campaign=389f6fe14e-DNU20151125&utm_medium=email&utm_term=0_1fcbc04421-389f6fe14e-197565045
Southern New Hampshire U's College for America
releases a promising early snapshot of the general-education learning and
skills of students who are enrolled in a new form of competency-based
education.
A preliminary snapshot of the academic skills of
students who are enrolled in a new, aggressive form of competency-based
education is out, and the results look good.
Southern New Hampshire University used an outside
testing firm to assess the learning and skills in areas typically stressed
in general education that were achieved by a small group of students who are
halfway through an associate degree program at the university’s College for
America, which offers online, self-paced, competency-based degrees that do
not feature formal instruction and are completely untethered from the
credit-hour standard.
The university was the
first to get approval from the U.S. Department of
Education and a regional accreditor for its direct-assessment degrees. A
handful of other institutions have since followed suit. College for America
currently enrolls about 3,000 students, most of whom are working adults. It
offers associate degrees -- mostly in general studies with a concentration
in business -- bachelor’s degrees and undergraduate certificates.
To try to kick the tires in a public way, College
for America used the Proficiency Profile from the Educational Testing
Service. The
relatively new test assesses students in
core skill areas of
critical thinking, reading, writing and mathematics. It also gives
“context-based” subscores on student achievement in the humanities, social
sciences and natural sciences. The results could be notable because skeptics
of competency-based education fear the model might not result in adequate
learning in these areas.
Continued in article
"How a 40-Year-Old Idea Became Higher Education’s Next Big Thing," by Dan
Barrett, Chronicle of Higher Education, October 28, 2015 ---
http://chronicle.com/article/How-a-40-Year-Old-Idea-Became/233976
. . .
These pressures are intersecting with another
mounting concern: educational quality. Together, these forces are feeding an
unusual bipartisan consensus, and they are prompting higher-education
leaders to take a fresh look at an old idea:
competency-based education.
It allows students to make progress at their own pace by demonstrating what
they know and can do instead of hewing to the timeline of the semester.
While this model has long been used to expand access and lower costs,
particularly for adult students, it is now attracting attention as a way to
shore up academic rigor.
But this surge in interest has also sparked
questions. How effective a method is it for students with varying levels of
preparedness, or is it really only suited for the academically talented who
can learn on their own? Can it assure educational quality, or is it just
being offered to the disadvantaged as a cut-rate version of the full college
experience?
The story of how competency-based education has
become the latest Next Big Thing after being around for four decades is a
tale of timing,
of money and
politics, and of shifting academic norms.
Advocates for competency-based learning have seen
Big Things get hyped in the past, only to flame out. Still, they hope that
this model of learning can ultimately achieve a grand goal: staking a claim
to, defining, and substantiating quality in higher education.
Just maybe, the new stage of development that Mr.
Jessup envisioned decades ago may finally be arriving.
A generation or two
after Mr. Jessup’s prediction, a different sort of challenge confronted
higher education. The end of the Vietnam War and broadening opportunities
for women meant that adults who were older than the core demographic of 18-
to 21-year-olds were flocking to college. But with jobs and families, they
did not have the luxury of spending hours each week in a classroom.
Competency-based education as a concept began in
that era, the 1970s, with programs emerging to serve those older students.
Places like Excelsior College (then Regents College), Thomas Edison State
College, DePaul University’s School for New Learning, and the State
University of New York’s Empire State College were among the first to offer
such programs. They wanted to expand access.
Then, as state support for higher education dropped
and tuition and student-loan debt rose, so did concerns about cost.
Those two goals, access and cost, have dominated
years of efforts to remake higher education. Now, a third goal — educational
quality — is driving change.
Competency-based learning may be able to achieve
all three goals, say its supporters. And, they add, it is quality that
matters most. "Its potential is for a much higher level of quality and a
greater attention to rigor," says Alison Kadlec, senior vice president of
Public Agenda, a nonprofit organization that is playing a leading role in
the growth of this model.
"The worst possible outcome," she said, "would be
that competency-based education becomes a subprime form of learning."
Continued in article
At Texas A&M
"New Graduates Test the Promise of Competency-Based Education," by Dan
Berritt, Chronicle of Higher Education, May 21, 2015 ---
http://chronicle.com/article/New-Graduates-Test-the-Promise/230315/?cid=at
. . .
Same Rigor, Different Method
The Commerce campus created its program in response
to a directive by Rick Perry, then the governor of Texas, for universities
to
develop bachelor's degree programs
that would cost students $10,000.
Led by the Texas Higher Education Coordinating
Board, faculty members and administrators at Commerce collaborated with
their peers at South Texas College, analyzing labor-force projections and
interviewing local employers. The data
suggested that the state would see growing demand
for midlevel managers with bachelor’s degrees in manufacturing and the
service industry. So the professors and administrators designed
a bachelor of applied arts and sciences
in organizational leadership, with a
largely standardized series of courses and a
competency-based model. The development phase attracted money from the Bill
& Melinda Gates Foundation and Educause, and the program is now delivered in
hybrid form, in person and online, at South Texas and entirely online
through Commerce.
Students pay $750 for a seven-week term, during
which they complete as many "competencies" as they can. That means mastering
skills like problem-solving and applied research as demonstrated on written
assignments or video presentations. The competencies are woven into courses
for the major as well as general-education requirements.
The biggest stumbling block for faculty members was
terminology, said Ricky F. Dobbs, a professor of history at Commerce and
dean of its University College.
"You can make the word ‘competency’ mean just about
anything," he said. As part of a team of faculty members and administrators
that was creating the program, Mr. Dobbs and his colleagues used
learning outcomes defined by the Association of
American Colleges and Universities to develop a set of broad competencies in
areas like change management, organizational behavior, and information
literacy. The group of instructors across campuses arrived at a common
understanding: Their task was to think about how their various disciplines
helped students develop skills.
To use quantitative data to make decisions, for
example, students must read a paper on data analysis in government and watch
a video on big data in corporations. On discussion boards, the students
answer questions about the material and respond to their peers.
To finish off that particular competency, students
write at least 250 words describing the utility of statistics, offering
three examples of how the field "makes a difference in all our lives, all
the time." Incorporating personal examples, they must explain how
translating data into information can help decision-making.
The program design is not well suited to
traditional-age students, Mr. Dobbs said, because those enrolled must
complete assignments largely on their own, often applying material they’ve
learned in the workplace. "It’s the same rigor," he said. "It’s simply a
different method of presenting it to a different population."
New Perspectives
Among the new graduates, several found the
experience academically challenging, even occasionally overwhelming.
R. Michael Hurbrough Sr. said that it was one of
the most difficult efforts he’d undertaken, and that he often felt like
abandoning it. But he stuck with it, crediting help from Commerce faculty.
Continued in article
Jensen Comment
There are controversies that guardhouse lawyers in the academy will raise
(follow the comments at the end of this article as they unfold). Firstly, we
might challenge the phrase "same rigor." In competency-based examinations there
may well be more rigor in terms of technical detail and grading (recall how
Coursera flunked almost everybody in a computer science course at San Jose
State). But there is much less rigor in terms of class participation, such as
taking part in classroom analysis of comprehensive cases like those that are
central to the Harvard Business School and virtually all onsite law schools.
Secondly, there are barriers to entry in some professions. To sit for the CPA
examination a degree is not necessary, but students must complete 150 hours of
college credit at universities approved by state boards of accountancy. Most
state boards also have requirements as to the courses that must be passed in
selected areas of accounting, business, information systems, and business law.
If you must have 150 approved hours of credit, why not get a master's degree
like most students who now sit for the CPA examination?
I'm convinced that the day will come when a student's transcript will have
college degrees replaced by scores of badges of accomplishment, both for
course credits and for competencies demonstrated in areas where no courses
were taken for credit (such as MOOC courses). But we are a long way from
professions accepting these types of transcripts.
In the future, badges and certifications will probably replace college diplomas
both for landing jobs and for obtaining promotions.
But not all badges and certifications are created equal. The best ones will
be those that have tough prerequisites, tough grading, and tough experience
requirements. There's precedent for the value of certifications in medicine:
the MD degree is now only a
prerequisite for valuable board certifications in ophthalmology, orthopedics,
neurology, cardiovascular surgery, etc.
What will be interesting is to see how long it will take
badges/certifications to replace Ph.D. degrees for landing faculty jobs in
higher education. At present specializations are somewhat ad hoc, without
competency-based testing. For example, accounting professors can advance to
specialties like auditing and corporate tax accounting with self-study and
no competency-based testing. This may change in the future (tremble, tremble).
Watch the video at
https://www.youtube.com/watch?v=5gU3FjxY2uQ
The introductory screen on the above video reads as follows (my comments are in
parentheses)
In Year 2020 most colleges and universities no
longer exist (not true since residential colleges provide so much
more than formal education)
Academia no longer the gatekeeper of education
(probably so but not by Year 2020)
Tuition is an obsolete concept (a
misleading prediction since badges will not be free in a USA that already
has $100 trillion in unfunded entitlements)
Degrees are irrelevant (yeah,
one-size-fits-all diplomas are pretty much dead already)
What happened to education?
What happened to Epic?
Competency-Based Learning ---
http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge
"If B.A.’s Can’t Lead Graduates to Jobs, Can Badges Do the Trick?" by
Goldie Blumenstyk, Chronicle of Higher Education, March 2, 2015 ---
http://chronicle.com/article/If-BA-s-Can-t-Lead/228073/?cid=at&utm_source=at&utm_medium=en
Employers say they are sick of encountering new
college graduates who lack job skills. And colleges are sick of hearing that
their young alumni aren’t employable.
Could a new experiment to design employer-approved
"badges" leave everyone a little less frustrated?
Employers and a diverse set of more than a
half-dozen universities in the Washington area are about to find out,
through a project that they hope will become a national model for
workplace badges.
The effort builds on the burgeoning national
movement for badges and other forms of "microcredentials." It also pricks
at much broader questions about the purpose and value of a college degree in
an era when nearly nine out of 10 students say their top reason for going to
college is to get a good job.
The "21st Century Skills Badging Challenge" kicks
off with a meeting on Thursday. For the next nine months, teams from the
universities, along with employers and outside experts, will try to pinpoint
the elements that underlie skills like leadership, effective storytelling,
and the entrepreneurial mind-set. They’ll then try to find ways to assess
students’ proficiency in those elements and identify outside organizations
to validate those skills with badges that carry weight with employers.
The badges are meant to incorporate the traits most
sought by employers, often referred to as "the four C’s": critical thinking,
communication, creativity, and collaboration.
"We want this to become currency on the job
market," says Kathleen deLaski, founder of the
Education Design Lab,
a nonprofit consulting organization that is coordinating the project.
No organizations have yet been selected or agreed
to provide validations. But design-challenge participants say there’s a
clear vision: Perhaps an organization like
TED issues a badge in storytelling. Or a company
like
Pixar, or
IDEO, the design and consulting firm, offers a
badge in creativity.
If those badges gain national acceptance, Ms.
deLaski says, they could bring more employment opportunities to students at
non-elite colleges, which rarely attract the same attention from recruiters
as the Ivies, other selective private colleges, or public flagships. "I’m
most excited about it as an access tool," she says.
‘Celebrating’ and ‘Translating’
The very idea of badges may suggest that the
college degree itself isn’t so valuable—at least not to employers.
Badge backers prefer a different perspective. They
say there’s room for both badges and degrees. And if anything, the changing
job market demands both.
Through their diplomas and transcripts, "students
try to signal, and they have the means to signal, their academic
accomplishments," says Angel Cabrera, president of George Mason University,
which is involved in the project. "They just don’t have the same alternative
for the other skills that employers say they want."
Nor is the badging effort a step toward
vocationalizing the college degree, participants say. As Ms. deLaski puts
it: "It’s celebrating what you learn in the academic setting and translating
it for the work force."
Yet as she and others acknowledge, badges by
themselves won’t necessarily satisfy employers who now think graduates don’t
cut it.
That’s clear from how employer organizations that
may work on the project regard badges. "We’re presuming that there is an
additional skill set that needs to be taught," says Michael Caplin,
president of the Tysons Partnership, a Northern Virginia
economic-development organization. "It’s not just a packaging issue."
In other words, while a move toward badges could
require colleges to rethink what they teach, it would certainly cause them
to re-examine how they teach it. At least some university partners in the
badging venture say they’re on board with that.
"Some of what we should be doing is reimagining
some disciplinary content," says Randall Bass, vice provost for education at
Georgetown University, another participant in the project.
Mr. Bass, who also oversees the
"Designing the Future(s) of the University"
project at Georgetown, says many smart curricular changes that are worth
pursuing, no matter what, could also lend themselves to the goals of the
badging effort. (At the master’s-degree level, for example, Georgetown has
already begun offering a one-credit course in grant writing.)
"We should make academic work more like work," with
team-based approaches, peer learning, and iterative exercises, he says.
"People would be ready for the work force as well as getting an engagement
with intellectual ideas."
Employers’ gripes about recent college graduates
are often hard to pin down. "It depends on who’s doing the whining," Mr.
Bass quips. (The critique he does eventually summarize—that employers feel
"they’re not getting students who are used to working"—is a common one.)
Where Graduates Fall Short
So one of the first challenges for the badging
exercise is to better understand exactly what employers want and whether
colleges are able to provide it—or whether they’re already doing so.
After all, notes Mr. Bass, many believe that
colleges should produce job-ready graduates simply by teaching students to
be agile thinkers who can adapt if their existing careers disappear. "That’s
why I think ‘employers complain, dot dot dot,’ needs to be parsed," he says.
Mr. Caplin says his organization plans to poll its
members to better understand where they see college graduates as falling
short.
Continued in article
MOOCs ---
http://en.wikipedia.org/wiki/MOOCs
Coursera ---
http://en.wikipedia.org/wiki/Coursera
Coursera /kɔərsˈɛrə/ is a for-profit educational
technology company founded by computer science professors Andrew Ng and
Daphne Koller from Stanford University that offers massive open online
courses (MOOCs). Coursera works with universities to make some of their
courses available online, and offers courses in physics, engineering,
humanities, medicine, biology, social sciences, mathematics, business,
computer science, and other subjects. Coursera has an official mobile app
for iOS and Android. As of October 2014,
Coursera has 10 million users in 839 courses from 114 institutions.
Continued in article
Jensen Comment
Note that by definition MOOCs are free
courses, generally offered by prestigious or other highly respected
universities that serve up videos of their live on-campus courses to the
world at large. MOOC leaders in this regard have been MIT, Stanford, Harvard,
Penn, and other prestigious universities with tens of billions of dollars
invested in endowments that give these wealthy universities the financial
flexibility to develop new ways to serve the public.
When students seek some type of transcript "credits" for MOOCs, the "credits"
are usually not free since they entail some type of competency hurdle such as
examinations or, at a minimum, proof of participation. The "credits" are not
usually granted by the universities, like Stanford, that provide the MOOCs.
Instead, the credits, certificates, badges, or whatever are provided by
private-sector companies like Coursera, Udacity, etc.
Sometimes Coursera contracts with a college wanting to give its students
credits for taking another university's MOOC, such as the now infamous
instance when more than half of the San Jose State University students in a
particular MOOC course did not pass a Coursera-administered final examination.
"What Are MOOCs Good For? Online courses
may not be changing colleges as their boosters claimed they would, but they can
prove valuable in surprising ways," by Justin Pope, MIT's Technology
Review, December 15, 2014 ---
http://www.technologyreview.com/review/533406/what-are-moocs-good-for/?utm_campaign=newsletters&utm_source=newsletter-daily-all&utm_medium=email&utm_content=20141215
The following describes how a company, Coursera, long involved with the
history of MOOCs, is moving toward non-traditional "credits" or
"microcredentials" in a business model that it now envisions for itself as a
for-profit company. Also note that MOOCs are still free for participants not
seeking any type of microcredential.
And the business model described below probably won't apply to thousands of
MOOCs in art, literature, history, etc. It may apply to subsets of business
and technology MOOCs, but even that does not mean those MOOCs are no longer
free for students who are not seeking microcredentials. The payments are for
the "microcredentials" awarded for demonstrated competencies. However these
come to be defined in the future, they will not necessarily be traditional
college transcript credits. A better term might be "badges of competency,"
but they will probably be called microcredentials.
Whether or not these newer types of microcredentials are successful
depends a great deal on the job market.
If employers begin to rely upon them, in addition to an applicant's traditional
college transcript, then Coursera's new business model may take off. This makes
it essential that Coursera carefully control the academic standards for its
newer types of "credits" or "badges."
"Specializations, Specialized," by Carl
Straumsheim, Inside Higher Ed, February 12, 2015 ---
https://www.insidehighered.com/news/2015/02/12/coursera-adds-corporate-partners-massive-open-online-course-sequences
Massive open online course providers such
as Coursera have long pointed to the benefits of the data collected by the
platforms, saying it will help colleges and universities understand how
students learn online. Now Coursera’s data is telling the company that
learners are particularly interested in business administration and
technology courses to boost their career prospects -- and that they want to
take MOOCs at their own pace.
As a result, Coursera will this year
offer more course sequences, more on-demand content and more partnerships
with the private sector.
Asked if Coursera is closer to
identifying a business model, CEO Rick Levin said, “I think we have one. I
think this is it.”
Since its founding in 2012, Coursera has
raised millions of dollars in venture capital
while searching for a business model. Many questioned if the
company's original premise -- open access to the world's top professors --
could lead to profits, but with the introduction of a verified certificate
option, Coursera
began to make money
in 2013. By that October, the company had earned its first million.
In the latest evolutionary step for its
MOOCs, Coursera on Wednesday
announced a series of capstone projects developed
by its university partners in cooperation with companies such as Instagram,
Google and Shazam. The projects will serve as the final challenge for
learners enrolled in certain Specializations -- sequences of related courses
in topics such as cybersecurity, data mining and entrepreneurship that
Coursera
introduced last year. (The company initially
considered working with Academic Partnerships before both companies created
their version of Specializations.)
The announcement is another investment
by Coursera in the belief that adult learners, years removed from formal
education, are increasingly seeking microcredentials -- bits of knowledge to
update or refresh old skills. Based on the results from the past year, Levin
said, interest in such credentials is "palpable." He described bundling
courses together into Specializations and charging for a certificate as “the
most successful of our product introductions." Compared to when the
sequences were offered as individual courses, he said, enrollment has “more
than doubled” and the share of learners who pay for the certificate has
increased “by a factor of two to four.”
“I think people see the value of the
credential as even more significant if you take a coherent sequence,” Levin
said. “The other measure of effectiveness is manifest in what you’re seeing
here: company interest in these longer sequences.”
Specializations generally cost a few
hundred dollars to complete, with each individual course in the sequence
costing $29 to $49, but Coursera is still searching for the optimal course
length. This week, for example, learners in the Fundamentals of Computing
Specialization were surprised to find its three courses had been split into
six courses, raising the cost of the entire sequence from $196 to $343.
Levin called it a glitch, saying learners will pay the price they initially
agreed to.
The partnerships are producing some
interesting pairings. In the Specialization created by faculty members at
the University of California at San Diego, learners will “design new social
experiences” in their capstone project, and the best proposals will receive
feedback from Michel "Mike" Krieger, cofounder of Instagram. In the
Entrepreneurship Specialization out of the University of Maryland at College
Park, select learners will receive an opportunity to interview with the
accelerator program 500 Startups.
As those examples suggest, the benefits
of the companies’ involvement mostly apply to top performers, and some are
more hypothetical than others. For example, in a capstone project created by
Maryland and Vanderbilt University faculty, learners will develop mobile
cloud computing applications for a chance to win tablets provided by Google.
“The best apps may be considered to be featured in the Google Play Store,”
according to a Coursera press release.
Anne M. Trumbore, director of online
learning initiatives at the University of Pennsylvania’s Wharton School,
said the capstone projects are an “experiment.” The business school, which
will offer a Specialization sequence in business foundations, has partnered
with the online marketplace Snapdeal and the music identification app
Shazam, two companies either founded or run by Wharton alumni.
“There’s not a sense of certainty about
what the students are going to produce or how the companies are going to use
it,” Trumbore said. “Snapdeal and Shazam will look at the top projects
graded highest by peers and trained staff. What the companies do after that
is really up to them. We have no idea. We’re casting this pebble into the
pond.”
Regardless of the companies' plans,
Trumbore said, the business school will waive the application fee for the
top 15 learners in the Specialization and provide scholarship money to those
that matriculate by going through that pipeline.
“The data’s great, but the larger
incentive for Wharton is to discover who’s out there,” Trumbore said.
Levin suggested the partnering companies
may also be able to use the Specializations as a recruitment tool. “From a
company point of view, they like the idea of being involved with educators
in their fields,” he said. “More specifically, I think some of the companies
are actually hoping that by acknowledging high-performing students in a
couple of these capstone projects they can spot potential talent in
different areas of the world.”
While Coursera rolled out its first
Specializations last year, Levin said, it also rewrote the code powering the
platform to be able to offer more self-paced, on-demand courses. Its MOOCs
had until last fall followed a cohort model, which Levin said could be
“frustrating” to learners when they came across an interesting MOOC but were
unable to enroll. After Coursera piloted an on-demand delivery method last
fall, the total number of such courses has now reached 47. Later this year,
there will be “several hundred,” he said.
“Having the courses self-paced means
learners have a much higher likelihood of finishing,” Levin said. “The idea
is to advantage learners by giving them more flexibility.”
Some MOOC instructors would rather have
rigidity than flexibility, however. Levin said some faculty members have
expressed skepticism about offering on-demand courses, preferring the
tighter schedule of a cohort-based model.
Whether it comes to paid Specializations
versus free individual courses or on-demand versus cohort-based course
delivery, Levin said, Coursera can support both. “Will we develop more
Specializations? Yes. Will we depreciate single courses? No,” he said. “We
don’t want to discourage the wider adoption of MOOCs.”
Continued in article
Bob Jensen's threads on MOOCs are at
http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI
Beyond the Essay: Making Student Thinking Visible in the Humanities (a
brainstorming project on teaching critical thinking) ---
http://cft.vanderbilt.edu/guides-sub-pages/beyond-the-essay/
Bob Jensen's threads on critical thinking and why it's so hard to teach ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#CriticalThinking
Also see
http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge
"Beyond Critical Thinking," by Michael S. Roth, Chronicle of Higher
Education's Chronicle Review, January 3, 2010 ---
http://chronicle.com/article/Beyond-Critical-Thinking/63288/
Learn Psychology ---
http://www.learnpsychology.org/
This is a very good article on the major issues of competency-based
assessment of learning
"Performance-Based Assessment," by Steven Mintz, Inside Higher
Ed, April 29, 2015 ---
https://www.insidehighered.com/blogs/higher-ed-beta/performance-based-assessment
. . .
In contrast, classroom discussions, debates, and
case studies tend to emphasize analysis, synthesis, and evaluation. Students
are typically asked to offer a critique or assessment, identify bias,
present a judgment, or advance a novel interpretation.
Performance-based assessment offers a valuable
alternative (or supplement) to the standard forms of student
evaluation. Performance-based assessment requires students to solve a
real-world problem or to create, perform, or produce something with
real-world application. It allows an instructor to assess how well students
are able to use essential skills and knowledge, think critically and
analytically, or develop a project. It also offers a measure of the depth
and breadth of a student’s proficiencies.
Performance-based assessment can, in certain
instances, simply be an example of what Bloom’s Taxonomy calls
application. Thus, a student or a team might be asked to apply knowledge and
skills to a particular task or problem.
But performance-based assessment can move beyond
Bloom’s Taxonomy when students are engaged in a project that requires them
to display creativity and that results in an outcome, project, or
performance that is genuinely new. The more sophisticated performance
assessments involve research, planning, design, development, implementation,
presentation, and, in the case of team-based projects, collaboration.
If performance-based assessments are to be fair,
valid, and reliable, it is essential that there is an explicit rubric that
lays out the criteria for evaluation in advance. It is also helpful to ask
students to keep a log or journal to document the project’s development and
record their reflections on the developmental process.
The most commonly used assessments – the midterm
and final or the term paper – have an unpleasant consequence. Reliance on a
small number of high stakes assessments encourages too many students to
coast through the semester and to pull all-nighters when their grade is on
the line. This may inadvertently encourage a party culture.
In stark contrast, performance-based assessment
offers a way to ensure that evaluation is truly a learning experience, one
that engages students and that measures the full range of their knowledge
and proficiencies.
Steven Mintz is Executive Director of the University of Texas System's
Institute for Transformational Learning and Professor of History at the
University of Texas at Austin. Harvard University Press will publish his
latest book, The Prime of Life: A History of Modern Adulthood, next
month.
Arizona State's Freshman Year MOOCs Open to All With Final Examinations for
Inexpensive Credits
"Arizona State and edX Will Offer an Online Freshman Year, Open to All,"
by Charles Huckabee, Chronicle of Higher Education, April 23, 2015 ---
http://chronicle.com/blogs/ticker/arizona-state-and-edx-will-offer-an-online-freshman-year-open-to-all/97685?cid=wc&utm_source=wc&utm_medium=en
Arizona State University is joining with the MOOC
provider edX in a project that it says “reimagines the freshman year” and
opens a new low-cost, low-risk path to a college degree for students
anywhere in the world.
The project, called the
Global Freshman Academy, will offer a set of eight
courses designed to fulfill the general-education requirements of a freshman
year at Arizona State at a fraction of the cost students typically pay, and
students can begin taking courses without going through the traditional
application process, the university said in a news release on Wednesday.
Because the classes are offered as
massive open online courses, or
MOOCs, there is no limit on how many students can
enroll.
. . .
The courses to be offered through the Global
Freshman Academy are being designed and will be taught by leading scholars
at Arizona State. “These courses are developed to their rigorous standards,”
Adrian Sannier, chief academic officer for
EdPlus at ASU, said
in the release. “Course faculty are committed to ensuring their students
understand college-level material so that they can be prepared to
successfully complete college.”
Students who pass a final examination in a course
will have the option of paying a fee of no more than $200 per credit hour to
get college credit for it.
Mr. Agarwal and Mr. Crow are scheduled to formally
announce the project at a conference in Washington on Thursday.
Jensen Comments and Questions
The real test is how well these credits are accepted by other universities for
transfer credit. It probably will not be an issue for graduate school admission
since there are three more years of more traditional onsite or online credits.
But it could be a huge issue, for example, when a student takes the first year
of ASU MOOC credits and then tries to have those credits accepted by other
universities (such as TCU) that still resist accepting any online courses for
transfer credit.
Question
What are the main differences between MOOC online credits and traditional online
credits such as those documented at the following site?
http://faculty.trinity.edu/rjensen/CrossBorder.htm
For example, at many universities these days there are multiple sections of a
course where some sections are onsite and some are online. Often they are taught
by the same instructor. The online sections are usually as small as or even
smaller than the onsite sections because online instructors often have more
student interactions, such as instant messaging, not available to onsite students ---
http://en.wikipedia.org/wiki/Instant_messaging
Answer
Here are the obvious differences between MOOC online credits and
traditional online credits.
- The huge difference between the ASU MOOC year of courses and the
University of Pennsylvania Wharton School MOOC year of courses is that the
Wharton School MOOC courses are not
available for credit (and therefore are free). The ASU MOOC courses are
available for credits that will not be totally free, although they will be
available at greatly discounted prices.
- MOOC courses are open to everybody in the world and have no admission
standards.
- These are not intended to be equivalent to advanced placement (AP)
credits where students eventually fill in course requirements with
other more advanced courses. The ASU MOOC courses have no requirements to
earn substitute credits. Universities do vary with respect to substitution
requirements for AP credit, and many do not require taking added replacement
courses ---
http://en.wikipedia.org/wiki/Advanced_Placement
I suspect that at some universities the ASU MOOCs will be similar to AP
credits except that the competency-examination process is different.
- MOOC courses generally have no limits to class size.
- MOOC courses generally have no prerequisites; for example, a MOOC calculus
or linear algebra course can be taken without any prerequisite courses.
- MOOC courses are generally very large such that student interactions
online with instructors and/or other students are virtually non-existent.
- MOOC courses generally do not have graded writing assignments such as
term papers.
- MOOC courses do not have graded homework.
- MOOC courses do not have graded team projects, whereas team projects are
common in smaller traditional online courses.
- MOOC courses generally do not have class attendance requirements or
class participation requirements even though they generally do have
classes. The first MOOC course ever offered was an artificial intelligence
course at Stanford University where students enrolled in the course on
campus had the option of not attending class. Some faculty feel that such
courses should have required class attendance and participation.
The bottom line is that it appears that the ASU freshman year MOOC course
credits will be little more than competency-based
credits. This will be controversial since many faculty in higher
education feel that credits in general education core courses should
entail class participation, including first-year core courses. For example, at
Trinity University there is a first-year seminar that all new students take in
very small classes that require a lot of class participation in discussions of
assigned readings and the writing of term papers. I think some sections of this
seminar don't even have examinations. I did not have examinations when I taught
a section of this seminar for two years.
In traditional large lecture courses on campus students typically are broken
out into accompanying recitation sections intended for class participation and
interactions with a recitation instructor.
Jensen Note
I never anticipated competency-based credits in the first-year of college. I
think these will be wildly popular in advanced-level training courses such as a
CPA examination review course in the final (fifth) year of an accounting
program. Using competency-based courses for first-year general education courses
is more controversial.
Competency-Based Degrees Without Course Requirements
"U.S. Approval for Wisconsin Competency-Based Program," Inside Higher Ed,
September 3, 2014 ---
https://www.insidehighered.com/quicktakes/2014/09/03/us-approval-wisconsin-competency-based-program
Jensen Comment
There are somewhat similar options at other universities like the University of
Akron, Southern New Hampshire, and Capella.
We seem to have come full circle from the 19th Century when the University of
Chicago gave course credits for passing final examinations even if students did
not attend classes.
"Capella Gets Federal Approval for Competency-Based Degrees,"
Inside Higher Ed, August 13, 2013 ---
http://www.insidehighered.com/quicktakes/2013/08/13/capella-gets-federal-approval-competency-based-degrees
Northern Arizona University Offers a Dual Transcript Option, One of
Which is Competency-Based
"Competency-Based Transcripts," by Paul Fain, Inside Higher Ed,
August 9, 2013 ---
http://www.insidehighered.com/news/2013/08/09/northern-arizona-universitys-new-competency-based-degrees-and-transcripts
Jensen Comment
This program at Northern Arizona differs from the competency-based programs at the University of
Wisconsin, the University of Akron, Capella University, and Southern New Hampshire University in
that students at Northern Arizona must sign up for online courses at Northern Arizona before
becoming eligible for the competency-based transcript. It differs from Western
Governors University in that there are two transcripts rather than just a
competency-based transcript for online courses.
Capella may have a more difficult time getting employers and graduate schools
to accept Capella's competency-based transcript credit in general relative
to the University of Wisconsin, the University of Akron, and Southern New
Hampshire University. Time will tell. Much depends upon other criteria such as
SAT scores, GRE scores, GMAT scores, LSAT scores, MCAT scores, and professional
licensing examination scores.
December 19, 2014 Department of Education Letter
Q&A Regarding Competency-Based College Credits (and merit badges of
competence)
http://ifap.ed.gov/dpcletters/GEN1423.html
Bob Jensen's threads on competency-based education.
http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge
Note that there are two very different types of programs --- those that
require courses versus those that require no courses. For example,
Western Governors University requires course credits, but distance education
course instructors do not assign grades in a traditional manner. Instead,
grading is based on required competency-based performance examinations.
At the other extreme a few universities like the University of Wisconsin now
have selected programs where students can earn college credits based upon
competency-examination scores without course sign-ups. These programs are
considered the first steps toward what is increasingly known as a transcript of
merit badges that may eventually replace traditional degree programs such as
master's degrees in professions such as medicine.
In a sense, residency programs in medical schools already have "merit
badges" based upon experience and competency (licensing) examinations to
become ophthalmologists, cardiologists, urologists, neurologists, etc.
Video: A Scenario of Higher Education in 2020
November 14, 2014 message from Denny Beresford
Bob,
The link below is to a very
interesting video on the future of higher education – if you haven’t seen it
already. I think it’s very consistent with much of what you’ve been saying.
Denny
http://www.youtube.com/watch?v=5gU3FjxY2uQ
November 15, 2014 reply from Bob Jensen
Hi Denny,
Thank you for this link. I agree with many parts of this possible
scenario, and viewers should patiently watch it through the Google Epic in
2020.
But this is only one of many possible scenarios, and I definitely do not
agree with the predicted timings. None of the predictions for the future
will happen in such a short time frame.
It takes a long time for this video to mention the role of colleges as a
buffer between living as a protected kid at home and working full time on
the mean streets of life. And I don't think campus living and learning in
the future will just be for the "wealthy." We're moving toward a time when
campus living will be available more and more to gifted non-wealthy
students. But we're also moving toward a time when campus living and
learning may be available to a smaller percentage of students --- more like
Germany where campus education is free, but only the top 25% of the high
school graduates are allowed to go to college. The other 75% will rely more
and more on distance education and apprenticeship training alternatives.
Last night (November 14) there was a fascinating module on CBS News about
a former top NFL lineman (center) for the Rams who in the prime of his
career just quit and bought a 1,000 acre farm in North Carolina using the
millions of dollars he'd saved until then by playing football.
What was remarkable is that he knew zero about farming until he started
learning about it on YouTube. Now he's a successful farmer who gives over
20% of his harvest to food banks for the poor.
This morning I did a brief search and discovered that there are tons of
free videos on the technical aspect of farming just as there are tons of
videos that I already knew about on how to be a financial analyst trading in
derivative financial instruments.
My point is that there will be more and more people who are being
educated and trained along the lines of the video in your email message to
me.
http://www.youtube.com/watch?v=5gU3FjxY2uQ
The education and training will be a lifelong process because there is so
much that will be available totally free of charge. We will become more and
more like Boy-Girl Scouts earning our badges.
College degrees will be less and less important as the certification
badges (competency achievements) mentioned in the video take over as
chevrons of expertise and accomplishment. Some badges will be for hobbies,
and some badges will be for career advancement.
These are exciting times for education and training. We will become more
and more like the Phantom of the Library at Texas A&M without having to live
inside a library. This "Phantom" Aggie was a former student who started
secretly living and learning in the campus library. Now the world's free
"library" is only a few clicks away --- starting with Wikipedia and YouTube
and moving on to the thousands of MOOCs now available from prestigious
universities ---
http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI
Also see the new-world library alternatives at
http://faculty.trinity.edu/rjensen/bookbob2.htm
Thanks Denny
Bob
"Looking Into Competency-Based Education," Inside Higher Education,
January 26, 2015 ---
https://www.insidehighered.com/quicktakes/2015/01/26/looking-competency-based-education
A growing number of colleges are offering
competency-based degrees, and the emerging form of higher education has
caught the attention of state and federal policy makers. Yet few researchers
have taken an in-depth look at the range of competency-based programs. A
new paper from the American Enterprise Institute's
Center on Higher Education Reform tries to change this.
The paper by Robert Kelchen, an assistant professor
of education at Seton Hall University, is the first in a series that will
seek to "explore the uncharted landscape." Kelchen concludes that
competency-based education has the potential to "streamline the path to a
college degree for a significant number of students." Yet many questions
remain about who is currently enrolled in these programs, he wrote, or how
the degree tracks are priced.
"Competency, Texas-Style November 6, 2014," By Paul Fain, Inside
Higher Ed, November 6, 2014 ---
https://www.insidehighered.com/news/2014/11/06/competency-based-health-profession-credentials-university-texas-system
The University of Texas System plans to make its
first foray into competency-based education fittingly far-reaching.
The system’s forthcoming “personalized” credentials
will be limited to the medical sciences, for now. But the new,
competency-based curriculum will involve multiple institutions around the
state, system officials said, with a track that eventually will stretch from
high school, or even middle school, all the way to medical school.
Many details still need to be hashed out about the
project, which the system
announced this week. But several key elements are
in place.
Continued in article
Jensen Comment
Competency-based college credits are now widely available from both non-profit
and for-profit universities. However, the programs are very restricted to
certain disciplines, often graduate studies. In Western Canada, for example, the
Chartered Accountancy School of Business (CASB) has offered a competency-based
masters degree for years. However, students do enroll in courses and have
extensive internships on the job ---
http://www.casb.com/
Bob Jensen's threads on competency-based college credits ---
http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge
College Credits Without Courses
"Managing Competency-Based Learning," by Carl Straumsheim, Inside Higher
Ed, September 29, 2014 ---
https://www.insidehighered.com/news/2014/09/29/college-america-spins-its-custom-made-learning-management-system
Southern New Hampshire University, seeing
an opening in the market for a learning management system designed around
competency-based education, is spinning off the custom-made system it built
to support College for America.
Before
College for America
launched in January 2013, the university considered
building a platform to support the competency-based education subsidiary on
top of the learning management system used on campus, Blackboard Learn. The
university instead picked Canvas, created by Instructure, but after only a
couple of months, “we decided we needed to build our own,” said Paul J.
LeBlanc, president of the university.
For most colleges and universities, any
one of the major learning management systems on the market will likely meet
their requirements for posting course content and engaging with students
outside the classroom. But for institutions that don’t tie academic progress
to the course or the credit hour -- or have an
unconventional method of delivering education --
those same systems may be restrictive.
“We speak of the world of LMSes as a
world that’s designed around content delivery, course delivery and the
mechanics of running a course,” LeBlanc said. “It’s very course-centric, so
we built our program on the basis of our relationship with our students.”
LeBlanc and College for America are
calling it a “learning relationship management system,” a composite term to
describe a learning management system built on top of Salesforce, the
popular customer relationship management software. LeBlanc said the system
aims to strike a balance between “lots of things that CIOs love” -- such as
software as a service and cloud hosting -- with “what educators love.”
For students, the
system looks more like a social network than a
learning management system. When they log in, students are greeted by an
activity feed, showing them a tabbed view of their current projects, goals
and feedback. A column on the right side of the screen lists connections and
to-dos, and a bar along the top tracks progress toward mastering
competencies.
Behind the scenes, faculty members and
administrators are treated to a stream of data about everything students do
inside the system, from when they submitted their paperwork and their
statement of purpose to the surveys they have answered and the time spent
talking to academic coaches.
“I think this next generation of systems
is really going to be about data and analytics and relationship management,”
LeBlanc said. “The whole shift in conversation, it seems to me, is about
student-centeredness.”
On Oct. 1, one year after the system went
live at College for America, the university is spinning it off as
Motivis Learning
and writing the for-profit subsidiary a $7 million
check. In its first phase, LeBlanc said, the company will further develop
its platform based on how other institutions are approaching
competency-based learning.
One of Motivis’s early design partners,
the University of Central Missouri, hopes to use the system to cut down on
administrative overlap. Its
Missouri Innovation Campus program, which gives
students an opportunity to earn a bachelor’s degree two years after
graduating high school, has in its first year attempted to tie together data
from a school district, a community college and a four-year institution with
manual spreadsheet work.
“We’ve likened it to trying to cobble
together three different student information systems, three different
registrations ..., three student IDs, three admissions portfolios,” said
Charles M. (Chuck) Ambrose, president of the university. “What we’re trying
to envision is that this LMS will help move us to a superhighway or an
Autobahn.”
The university will also be able to
invite local businesses into the system, allowing internship supervisors to
log students’ progress instead of filling out a paper form, Ambrose said.
Central Missouri’s model is one of many
Motivis is interested in tweaking its system to support, said Brian Peddle,
College for America’s chief technology officer, who will become the
company's CEO. One idea, he said, is to produce the common features of any
learning management system, then offer “building blocks” to support
traditional courses, competency-based learning and other modes of delivery.
Continued in article
Bob Jensen's threads on alternative universities that now have
competency-based learning alternatives ---
http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge
Some like Western Governors University require course enrollments but grade
on the basis of competency-based examinations.
Others like the University of Wisconsin, the University of Akron, and
Southern New Hampshire do not require course enrollments.
Kaplan University ---
http://en.wikipedia.org/wiki/Kaplan_University
"For-Profit Giant Starts Competency-Based ‘Open College’," by Goldie
Blumenstyk, Chronicle of Higher Education, October 3, 2014 ---
http://chronicle.com/article/For-Profit-Giant-Starts/149227/?cid=at&utm_source=at&utm_medium=en
One of the biggest for-profit college companies in the country is creating
an "Open College" aimed at adults who may already have skills and experience
that could qualify for college credits.
The new venture, from Kaplan Higher Education, will include free online
services and personalized mentoring to help people identify and organize
prior experience and skills that could count toward a degree or move them
closer to a new career.
It will also provide fee-based services, under a subscription model, that
will offer ways for students to satisfy the remaining requirements for a
bachelor of science degree in professional studies from Kaplan University.
Students who enroll in Open College could take courses at Kaplan University
or from other sources, such as the MOOC provider edX or the Saylor
Foundation, as long as the students ultimately meet the course outcomes set
by Open College.
Kaplan Higher Education, part of the Graham Holdings Company, hopes to begin
enrolling its first Open College@KU students on Monday.
The Kaplan offerings respond to a growing interest in competency-based
education and a concern among many higher-education experts about the
absence of tools to help people, especially adults, find more economical and
efficient pathways to degrees and careers.
Other ventures, including the movement around "badges," are
trying to develop ways to take students’ informally acquired knowledge and
"certify it, organize it, and credential it," notes Mark S. Schneider, a
vice president at the American Institutes for Research who studies the
earnings of college graduates. The Kaplan venture is "touching a need that
everybody recognizes," he says, but whether it can actually execute the idea
remains to be seen.
Open College will
not participate in federal student-aid programs. But company officials say
it will nonetheless offer an "affordable" path to a college degree through
its use of assessments that give credit for prior learning and the
self-paced program.
With enrollment subscription costs of $195 a month, charges of $100 per
assessment for each of the 35 course equivalents needed to earn credits
toward a degree, and a $371-per-credit charge for a final six-credit
capstone course, a student entering with no credits who pursued the program
for 48 straight months could earn a bachelor’s degree for about $15,000.
Students who earned credits based on their prior experience would end up
paying less than that.
Officials expect that such students would typically enroll with about 60
credits, take 24 to 30 months to complete a degree, and pay about $9,500.
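The quoted prices can be checked with simple arithmetic. This sketch (the variable names are mine, purely illustrative, not anything from Kaplan) totals the full 48-month, no-prior-credit path described above:

```python
# Check the quoted Open College pricing for a student entering with no credits:
# $195/month subscription for 48 months, $100 per assessment for each of the
# 35 course equivalents, and a six-credit capstone at $371 per credit.
MONTHLY_SUBSCRIPTION = 195
MONTHS = 48
ASSESSMENT_FEE = 100
COURSE_EQUIVALENTS = 35
CAPSTONE_CREDITS = 6
CAPSTONE_PER_CREDIT = 371

subscription = MONTHLY_SUBSCRIPTION * MONTHS        # 9,360
assessments = ASSESSMENT_FEE * COURSE_EQUIVALENTS   # 3,500
capstone = CAPSTONE_CREDITS * CAPSTONE_PER_CREDIT   # 2,226

total = subscription + assessments + capstone
print(total)  # 15086 -- "about $15,000," as the article says
```

Students entering with prior credits would pay fewer assessment fees and fewer months of subscription, which is consistent with the roughly $9,500 figure the officials cite for a 24- to 30-month completion.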
'A Good Algorithm'
Mr. Schneider says the success of the venture, for Kaplan and for students,
depends on the quality of the counseling and the costs of providing it. And
that will depend on how much of it is based on online templates or
personalized service.
"Obviously, if you have a good algorithm, then your price is really low," he
says. And if Kaplan has that, he says, "more power to them. But if it’s
human interaction, how can you do it for $195 a month?"
Competency-based degrees are not new, even in the
for-profit-college sector. Capella University’s year-old FlexPath
program, for
example, now offers six degrees and enrolls about 100 graduate and
undergraduate students per quarter.
Peter Smith, president of the new Kaplan venture, says
the free features of Open College set it apart. For example, at the
nonprofit Western
Governors University, he
says, "you have to enroll" before you can know where you stand. "They do not
help you figure all the stuff out prior to enrollment."
Mr. Smith is no stranger to higher education. Before joining Kaplan seven
years ago, he was founding president of the Community College of Vermont,
founding president of California State University-Monterey Bay, and a member
of Congress. He says the offerings will help students who have accumulated
learning but don’t know "how to turn it into something valuable to
themselves."
The venture is not Kaplan’s first foray into
prior-learning assessments. In 2011 the company announced
a service it
then called KNEXT that would, for a fee, advise students on preparing
portfolios that demonstrated their expertise and qualifications and then
submitting them for credit evaluation. But that effort didn’t catch on. In
fact, Mr. Smith says, only two students outside of Kaplan University used
the $1,500 service.
But within Kaplan University, thousands of students took advantage of a
variant of that service in the form of a course. Kaplan Higher Education
also created a free online pilot version, called the Learning Recognition
Course, that it has been testing for the past year.
Mr. Smith says students who took the free online course or Kaplan's
instructor-led version used it to turn their experiences into something of
value: college credit. On average, the 100 or so students who took the
online course requested 50 quarter-course credits and were awarded an
average of 37. Those at Kaplan sought an average of 36 credits and were
awarded 27.
A Gateway
Now that the online course will be a gateway to Open College@KU, students
can take it at no cost to learn how to develop their expertise into a
portfolio. Then, if they later elect to have their experience and skills
assessed for credit, they will have several options: find another college
willing to evaluate their portfolio for credit; pay NEXT (as KNEXT has since
been renamed) to do an assessment for credit; enroll in Kaplan University or
its Mount Washington College, which will waive the fees for assessing the
credits; or enroll in the new Open College, which will assess the credits as
part of the basic subscription price.
Continued in article
Jensen Comment
There are several ways to spot diploma mills.
- Graduates can purchase their degrees with virtually no academic
standards, sometimes no courses or courses where everybody passes. Other
times there are course requirements or competency-based testing requirements
where everybody gets a diploma as long as they pay to graduate.
- Every applicant is accepted without rigorous remedial requirements to
level the playing field with fully matriculated students.
- Generous credits are given for "life experience." Every applicant has
life experience of some sort such that this type of college credit granting
is more of a marketing ploy to get more paying students.
I don't think the owner of Kaplan University will let Kaplan University
become a diploma mill, although there have been some academic scandals in the
past before The Washington Post, which owned Kaplan University, was sold to
the billionaire founder of giant online retailer Amazon --- Jeff Bezos. An
enormous academic scandal is publicity that I'm sure Bezos will desperately try
to avoid. Amazon depends too much on the legitimate academic market.
The essence of this new Kaplan open-enrollment program is to give credit for
"life experience" based upon competency-based testing. As the saying goes ---
the Devil is in the details. In this
case the details surround the rigor that makes graduates of the program
competitive with graduates of respected colleges and universities in the
Academy. Only the Ivy League universities can get away with courses where
everybody gets an A grade. The reason is that the admission criteria allow for
extreme grade inflation in these prestigious universities. Kaplan University is
a long way from the Ivy League.
Kaplan University is not the only for-profit university with competency-based
testing course credits. The Department of Education has already approved the
competency-based testing programs at Capella University. Similarly, such
programs have been approved in non-profit universities like the University of
Wisconsin, the University of Akron, and the University of Southern New
Hampshire. The Kaplan Program, however, appears to be more personalized in terms
of services other than mere administration of competency-based examinations.
I don't think any of these programs are intended for the dropouts or
graduates of ghetto schools in the largest cities of the USA. It's too expensive
and complicated to prepare for college unmotivated students who cannot even read
properly or do basic arithmetic. The competency-based programs are aimed at
highly motivated self-learners at higher levels of competency. For example, such
programs might seek out top high school graduates who dropped out of college
along the way for a variety of possible reasons, including unintended
parenthood. It might eventually even include college graduates trying to prepare
for certain vocations like nursing, pharmacy, or accounting.
As I said above, the Devil is in the details --- meaning that the Devil is in
the competency-based testing rigor.
"College, on Your Own: Competency-based education can help motivated
students. But critics say it’s no panacea," by Dan Berrett, Chronicle of
Higher Education, July 14, 2014 ---
http://chronicle.com/article/College-on-Your-Own/147659/?cid=wb&utm_source=wb&utm_medium=en
Jensen Comment
Several major universities like the University of Wisconsin and the University
of Akron are now providing competency-based testing for college credit. Western
Governors University, which has used this approach for years, is a bit
different. It grades on the basis of competency-based testing but also requires
that students enroll in courses.
Years and years ago the University of Chicago allowed students to take final
examinations for credit even though the students were not enrolled in courses.
Like it or not, we seem to be coming full circle.
Mathematics Assessment: A Video Library ---
http://www.learner.org/resources/series31.html
Western Governors University ---
http://en.wikipedia.org/wiki/Western_Governors_University
"In Boost to Competency Model, Western Governors U. Gets Top Marks in
Teacher Ed," by Dan Berrett, Chronicle of Higher Education, June 17, 2014
---
http://chronicle.com/article/In-Boost-to-Competency-Model/147179/?cid=at&utm_source=at&utm_medium=en
"Competency-Based Degrees: Coming Soon to a Campus Near You," by Joel
Shapiro, Chronicle of Higher Education, February 17, 2014 ---
http://chronicle.com/article/Competency-Based-Degrees-/144769/?cid=cr&utm_source=cr&utm_medium=en
Has distance education significantly affected the
business and teaching models of higher education? Certainly. Is it today’s
biggest disrupter of the higher-education industry? Not quite. In fact, the
greatest risk to traditional higher education as we know it may be posed by
competency-based education models.
Competency-based programs allow students to gain
academic credit by demonstrating academic competence through a combination
of assessment and documentation of experience. The model is already used by
institutions including Western Governors University, Southern New Hampshire
University, Excelsior College, and others, and is a recent addition to the
University of Wisconsin system.
Traditional educators often find competency
programs alarming—and understandably so. Earning college credit by virtue of
life experience runs afoul of classroom experience, which many educators
believe to be sacred. As a colleague recently said, "Life is not college.
Life is what prepares you for college."
In fact, traditional educators should be alarmed.
If more institutions gravitate toward competency-based models, more and more
students will earn degrees from institutions at which they take few courses
and perhaps interact minimally with professors. Then what will a college
degree mean?
It may no longer mean that a student has taken
predetermined required and elective courses taught by approved faculty
members. Rather, it would mean that a student has demonstrated a defined set
of proficiencies and mastery of knowledge and content.
Competency models recognize the value of
experiential learning, in which students can develop and hone skill sets in
real-world contexts. For instance, a student with a background in web design
may be able to provide an institution with a portfolio that demonstrates
mastery of computer coding or digital design. If coding or digital design is
a discipline in which the institution gives credit, and the mastery
demonstrated is sufficiently similar to that achieved in the classroom, then
the institution may grant credit based on that portfolio.
The logic of competency-based credit is compelling.
After all, colleges and universities hire most people to teach so that
students learn. If students can achieve the desired learning in other ways,
then why not provide them with the same credential as those who sat in the
traditional classrooms with the traditional faculty members?
Additionally, the competency-based model, so often
cast aside by traditional institutions, already exists within their walls.
Not only do many colleges give credit for
real-world learning through
(sometimes mandatory) internships, but a version of the competency model has
long been part of traditional assessment practices.
Most professors grade students on the basis of
their performance on particular assignments, such as papers, tests, and
projects. If a student’s final paper reflects a sufficient degree of
sophistication and mastery, then the professor gives the student a passing
grade, thus conferring credit. But how much can the professor really know
about how the student learned the material? If the end is achieved, how much
do the means matter?
In primary and secondary education, much is made of
measuring students’ growth. A successful teacher moves a student from Point
A to Point B. The greater the difference between A and B, arguably, the more
effective the teacher. But in higher education, rarely is any effort made to
formally assess student growth. Rather, professors typically give grades
based on final performance, regardless of students’ starting point. In the
classroom, competency models rule, even at traditional institutions.
The primary weakness of competency models, however,
is that they can be only as good as the assessment mechanisms they employ,
and, unfortunately, no assessment can be a perfect proxy for deep and
meaningful learning. Certainly, great education isn’t just about content. It
challenges students to consider others’ viewpoints, provides conflicting
information, and forces students to reconcile, set priorities, and choose.
In the best cases, it engenders a growth of intellect and curiosity that is
not easily definable.
Higher-end learning remains the defining value
proposition of great teaching within a formal classroom setting. But because
it is exceedingly hard to assess, it cannot easily be incorporated into
competency models.
Nonetheless, competency models will make
significant headway at the growing number of institutions that offer
skill-based programs with clearly delineated and easily assessed learning
outcomes. They will also appeal to students who want to save time and money
by getting credit applied to past experience. Institutions that serve these
students will thus find competency models to be a competitive advantage.
Meanwhile, institutions that are unwilling or
unable to incorporate elements of a competency model will be forced to
defend the value of learning that cannot be easily assessed and
demonstrated. That will be a hard message to communicate and sell,
especially given that students with mastery of applied and technical skill
sets tend to be rewarded with jobs upon graduation. Additionally,
noncompetency tuition will almost certainly rise relative to
competency-based credit models, which require less instruction and thus can
be delivered at lower cost.
The marketplace rarely reacts well to perceived low
marginal benefit at high marginal price.
Continued in article
Bob Jensen's threads on competency-based assessment and assessment of deep
understanding:
Concept Knowledge and Assessment of Deep Understanding ---
http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge
"The Baloney Detection Kit: Carl Sagan’s Rules for Bullshit-Busting and
Critical Thinking," by Maria Popova, Brain Pickings, January 3, 2014
---
http://www.brainpickings.org/index.php/2014/01/03/baloney-detection-kit-carl-sagan/
Carl Sagan was many things — a cosmic sage, voracious reader, hopeless
romantic, and brilliant philosopher. But above all, he endures
as our era’s greatest patron saint of reason and common sense, a master of
the vital balance between skepticism and openness. In
The Demon-Haunted World: Science as a Candle in the Dark (public
library) — the same indispensable volume that
gave us Sagan’s timeless meditation on
science and spirituality,
published mere months before his death in 1996 — Sagan shares his secret to
upholding the rites of reason, even in the face of society’s most shameless
untruths and outrageous propaganda.
In a chapter titled “The Fine Art of
Baloney Detection,” Sagan reflects on the many types of deception
to which we’re susceptible — from psychics to religious zealotry to paid
product endorsements by scientists, which he held in especially low regard,
noting that they “betray contempt for the intelligence of their customers”
and “introduce an insidious corruption of popular attitudes about scientific
objectivity.” (Cue in
PBS’s Joe Hanson on how to read science news.) But
rather than preaching from the ivory tower of self-righteousness, Sagan
approaches the subject from the most vulnerable of places — having just lost
both of his parents, he reflects on the all too human allure of promises of
supernatural reunions in the afterlife, reminding us that falling for such
fictions doesn’t make us stupid or bad people, but simply means that we need
to equip ourselves with the right tools against them.
Continued in article
Concept Knowledge and Assessment of Deep Understanding ---
http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge
"The Degree Is Doomed," by Michael Staton, Harvard Business Review
Blog, January 9, 2014 ---
http://blogs.hbr.org/2014/01/the-degree-is-doomed/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+harvardbusiness+%28HBR.org%29&cm_ite=DailyAlert-010914+%281%29&cm_lm=sp%3Arjensen%40trinity.edu&cm_ven=Spop-Email
"New Approach to Transfer," by Paul Fain, Inside Higher Ed,
January 9, 2014 ---
http://www.insidehighered.com/news/2014/01/09/wiche-transfer-passport-based-proficiency-rather-credits
Competency-Based Programs (where instructors do not
assign the grades) Can Work Well But Do Not Always Work Well
A Research Report
"Competency-Based Degree Programs in the U.S.: Postsecondary Credentials for
Measurable Student Learning and Performance," Council for Adult and Experiential
Learning, 2012 ---
http://www.cael.org/pdfs/2012_CompetencyBasedPrograms
Executive Summary
As our economy evolves, there is growing recognition of the importance of an
educated workforce. A key challenge is how to help more people, particularly
adults, succeed at the postsecondary level and earn degrees. However,
promoting degree completion is not our only challenge. Today our higher
education system is facing a crisis regarding its perceived quality. One
model for improving quality is competency-based education, in which an
institution clearly defines the specific competencies expected of its
graduates. This paper examines the current state of competency-based
postsecondary education in the U.S., profiling the various types of
competency-based, or competency-focused, models that currently exist, the
extent to which these programs assess for student competencies or learning
outcomes, and the extent to which these programs operate outside of a
credit-based system. These programs can help inform other institutions
interested in developing a stronger focus on competencies, whether by
demonstrating the possibilities of high quality programs or by facilitating
the recognition of learning.
Jensen Comment
The good news is that competency-based grades virtually put an end to games
played by students to influence their grades from their instructors. Instead
they may be more demanding on their instructors to do a better job on content
rather than being their buddies. Competency-based grading goes a long way to
leveling the playing field.
However, a competency-based system can be dysfunctional to motivation and
self-esteem. One of my old girlfriends at the University of Denver was called
in by her physical chemistry professor, who made a deal with her: if she would
change her major from chemistry, he agreed to give her a C grade. I honestly
think an F grade would've discouraged her to the point where she dropped out of
college. Instead she changed to DU's nursing school and flourished with a 3.3
GPA. Purportedly she became an outstanding nurse in a long and very satisfying
career that didn't require much aptitude for physical chemistry. For some reason
she was better in organic chemistry.
I can't imagine teaching a case course in the Harvard Business School where
the course grades are based entirely on a final examination that depends not at
all upon what the course instructor feels was "class participation." There's not
much incentive to participate in class discussions if those discussions have no
impact upon grades and instructor evaluations (such as evaluations for
graduate school and employment).
Much of what is learned in a course or an entire college curriculum cannot be
measured in test grades and term paper grading (where the readers of the term
papers are not the instructors).
In spite of all the worries about competency-based grading and student
evaluations, there are circumstances where competency-based education inspires
terrific learning experiences.
Competency-Based Learning ---
http://en.wikipedia.org/wiki/Competency-based_learning
Northern Arizona University Offers a Dual Transcript Option, One of
Which is Competency-Based
"Competency-Based Transcripts," by Paul Fain, Inside Higher Ed,
August 9, 2013 ---
http://www.insidehighered.com/news/2013/08/09/northern-arizona-universitys-new-competency-based-degrees-and-transcripts
Jensen Comment
This program differs from the competency-based programs at the University of
Wisconsin, the University of Akron, and Southern New Hampshire University in
that students must sign up for online courses at Northern Arizona before
becoming eligible for the competency-based transcript. It differs from Western
Governors University in that there are two transcripts rather than just a
competency-based transcript for online courses.
Bob Jensen's threads on competency-based assessment ---
http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge
"The Gates Effect The Bill & Melinda Gates Foundation has spent
$472-million (so far) on higher education. Why many in academe are not writing
thank-you notes," by Marc Parry, Kelly Field, and Beckie Supiano.
Chronicle of Higher Education, July 14, 2014 ---
http://chronicle.com/article/The-Gates-Effect/140323/
Jensen Comment
This is a long article filled with more opinion than fact. One suspects that
faculty unions had the major impact.
Obviously, distance education with large or small classes and
competency-based examinations is a poor choice for learning-challenged and
unmotivated learners who need more hand-holding and inspiration to learn.
On the other hand, the article assumes ipso facto that traditional colleges
are doing a great job educating. The fact of the matter is that the thing
traditional colleges are doing best is inflating grades for lazy students ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
The other misleading thing about the article is its implication that
competency-based testing leads to watered-down courses. The fact of the matter
is that many traditional teachers would shake in their boots if their
grade-inflated, pampered students had to take competency-based examinations ---
which is why students tend to do quite poorly on the MCAT competency-based
examinations for medical school after getting mostly A grades in their science
courses. It is also why aspiring teachers do so poorly on teacher certification
examinations that are hardly rocket science.
This is mostly a paranoia article giving the status quo in higher education
a pat on the back. If Bill Gates wants better reviews in the Chronicle he should
simply give the money to the AAUP.
July 19, 2013 message from Glen Gray
The following is the lead to an article that appeared
in today’s L.A. Times…
“San Jose State University is suspending a
highly touted collaboration with online provider Udacity to offer
low-cost, for-credit online courses after finding that more than half of
the students failed to pass the classes, officials said Thursday.”
Udacity Experiment at San Jose State Suspended After 56% to 76% of Students
Fail Final Exams ---
http://www.openculture.com/2013/07/udacity-experiment-at-san-jose-state-suspended.html
Are competency-based MOOCs tougher for students than traditional courses?
"Udacity Project on 'Pause'," by Ry Rivard. Chronicle of Higher
Education,
San Jose State's experiment with MOOC provider Udacity
attracted enormous attention when it was launched. But students didn't do as
well as they did in traditional classes.
"A University's Offer of Credit for a MOOC Gets No Takers," by Steve
Kolowich, Chronicle of Higher Education, July 8, 2013 ---
http://chronicle.com/article/A-Universitys-Offer-of-Credit/140131/?cid=wc&utm_source=wc&utm_medium=en
Jensen Comment
With nationwide median grades being around A- in live classrooms, it may well be
that students just fear that the same loose grading standards will not be
applied to competency-based grading in a MOOC ---
http://www.gradeinflation.com/
Students cannot brown nose a MOOC for a higher grade ---
http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge
There may also be problems transferring these MOOC credits to other
universities. There are many universities who do not allow transfer credit for
distance education courses in general, although this is somewhat hard to enforce
when major universities do not distinguish (on transcripts) what sections of
courses were taken onsite versus online. In many instances students have a choice
as to whether to take onsite sections or online sections of the same course. But
when all sections are only available via distance education other universities
may deny transfer credits. In accountancy, some state societies of CPAs, such as
in Texas, limit the number of distance education courses allowed for permission
to take the CPA examination.
Also it could be that this MOOC alternative just was not publicized enough to
reach its potential market.
Bob Jensen's threads on the controversial history of the OKI and the MOOCs
---
http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI
Question
What is the difference between traditional competency-based course credits and
"decoupled" competency-based course credits?
Answer
In traditional competency-based systems an instructor either does not assign
course grades or does so based solely on examinations that cannot be linked to
particular students in a way where knowing a student can affect the final grade.
Course grades are generally not influenced by class discussions (onsite or in
online chat rooms), homework, term papers, course projects, team performance,
etc. In many instances the instructors do not even prepare the examinations that
determine competency-based grades.
Western Governors University ---
http://en.wikipedia.org/wiki/Western_Governors_University
WGU was one of the first universities in modern times (starting in 1997) to
offer fully accredited online courses using a competency-based grading system.
However, students must participate in WGU courses and do class assignments
before they can take the competency-based examinations.
Southern New Hampshire University (a private onsite university that is not
funded by the State of New Hampshire) ---
http://en.wikipedia.org/wiki/Southern_New_Hampshire_University
Capella University ---
http://www.capella.edu/
Kentucky Community and Technical College System ---
http://www.kctcs.edu/
"Credit Without Teaching," by Paul Fain, Inside Higher Ed,
April 22, 2013 ---
http://www.insidehighered.com/news/2013/04/22/competency-based-educations-newest-form-creates-promise-and-questions
Earlier this year Capella University and the new
College for America began enrolling hundreds of students in academic
programs without courses, teaching professors, grades, deadlines or credit
hour requirements, but with a path to genuine college credit.
The two institutions are among a
growing number that are giving competency-based
education a try, including 25 or so nonprofit institutions. Notable examples
include
Western Governors University and the
Kentucky Community and Technical College System.
These programs are typically online, and allow
students to progress at their own pace without formal course material. They
can earn credit by successfully completing assessments that prove their
mastery in predetermined competencies or tasks -- maybe writing in a
business setting or using a spreadsheet to perform calculations.
College for America and a small pilot program at
Capella go a step further than the others, however, by severing any link to
the
credit hour standard.
This approach is called “direct assessment.” Other competency-based programs
track learning back to
seat time under the
credit hour, which assumes one hour of instruction and three hours of
coursework per week. (For more details from College for America, click
here.)
Continued in article
In "decoupled" course credit systems, a university offers competency-based
courses where class attendance or online course participation is not required.
Students can learn the material from any sources, including free online learning
modules, before signing up to take the competency-based examinations. Sometimes
more than one "progress" competency-based examination may be required. But no
particular course is required before taking any competency-based examination.
Decoupled systems become a lot like the Uniform CPA Examination where there
are multiple parts of the examination that may be passed in stages or passed in
one computer-based sitting.
Southern New Hampshire University (a private onsite university that is not
funded by the State of New Hampshire) ---
http://en.wikipedia.org/wiki/Southern_New_Hampshire_University
SNHU claims to be the first university to decouple courses from
competency-based examinations. However, I'm not certain that this claim is true
since the University of Wisconsin System may have been the first to offer some
decoupled competency-based degree programs. The University of Akron now has some
similar alternatives.
Wisconsin System's Competency-Based Degrees as of November 28, 2012
---
http://www.wisconsin.edu/news/2012/r121128.htm
It is expected that students seeking decoupled competency-based credits will
sign up for learning modules from various free learning systems.
Listing of Sites for Free Courses and Learning Modules (unlike certificates,
transferable credits are never free) ---
http://www.opencolleges.edu.au/informed/features/free-online-courses-50-sites-to-get-educated-for-free/
"Competency-Based Education Advances With U.S. Approval of Program,"
by Marc Parry, Chronicle of Higher Education, April 18, 2013 ---
Click Here
http://chronicle.com/blogs/wiredcampus/u-s-education-department-gives-a-boost-to-competency-based-education/43439?cid=wc&utm_source=wc&utm_medium=en
Last month the U.S. Education Department sent a
message to colleges: Financial aid may be awarded
based on students’ mastery of “competencies” rather than their accumulation
of credits. That has major ramifications for institutions hoping to create
new education models that don’t revolve around the amount of time that
students spend in class.
Now one of those models has cleared a major hurdle.
The Education Department has approved the eligibility of Southern New
Hampshire University to receive federal financial aid for students enrolled
in a new, self-paced online program called College
for America, the private, nonprofit
university has announced.
Southern New Hampshire bills its College for
America program as “the first degree program to completely decouple from the
credit hour.” Unlike the typical experience in which students advance by
completing semester-long, multicredit courses, students in College for
America have no courses or traditional professors. These working-adult
students make progress toward an associate degree by demonstrating mastery
of 120 competencies. Competencies are phrased as “can do” statements, such
as “can use logic, reasoning, and analysis to address a business problem” or
“can analyze works of art in terms of their historical and cultural
contexts.”
Students show mastery of skills by completing
tasks. In one task, for example, students are asked to study potential works
of art for a museum exhibit about the changing portrayal of human bodies
throughout history. To guide the students, Southern New Hampshire points
them to a series of free online resources, such as
“Smarthistory” videos presented by Khan Academy.
Students must summarize what they’ve found by creating a PowerPoint
presentation that could be delivered to a museum director.
Completed tasks are shipped out for evaluation to a
pool of part-time adjunct professors, who quickly assess the work and help
students understand what they need to do to improve. Southern New Hampshire
also assigns “coaches” to students to help them establish their goals and
pace. In addition, the university asks students to pick someone they know as
an “accountability partner” who checks in with them and nudges them along.
Students gain access to the program through their
employers. Several companies have set up partnerships with Southern New
Hampshire to date, including Anthem Blue Cross Blue Shield and ConAgra
Foods.
The Education Department is grappling with how to
promote innovation while preventing financial-aid abuses. Southern New
Hampshire, whose $2,500-a-year program was established last year with
support from the Bill & Melinda Gates Foundation, has served as a guinea pig
in that process. But other institutions are lining up behind it, hoping to
obtain financial aid for programs that don’t hinge on credit hours.
Continued in article
Jensen Comment
In many ways this SNHU program reduces the costs of student admission and of
offering remedial programs to get students up to speed to enroll in SNHU courses
on campus.
But there are enormous drawbacks
In some courses the most important learning comes from student interactions,
team projects, and most importantly case discussions. In the Harvard Business
School, master case teachers often cannot predict the serendipitous way each
class will proceed since the way it proceeds often depends upon comments made in
class by students. In some courses the most important learning takes place in
research projects. How do you have a competency-based speech course?
Time and time again, CPA firms have learned that the best employees are not
always medal winners on the CPA examination. For example, years and years ago an
occasional medal winner took only correspondence courses. And in some of those
instances the medal winner did not perform well on the job, in part for lack of
the interactive and team skills that in most instances are developed in onsite
and online education.
Note that distance education courses that are well done require student
interactions and often team projects. It is not necessary to acquire such skills
face-to-face. It is necessary, however, to require such interactions in a great
distance education course.
An SNHU College for America accounting graduate may not be allowed to sit for
the CPA examination in some states, especially Texas. Texas requires at least 15
credits be taken onsite face-to-face in traditional courses on campus. Actually
I cannot find where an accounting degree is even available from the SNHU College
for America degree programs.
"Green Light for Competency-Based Ed at Capella," Inside Higher Ed,
May 23, 2013 ---
http://www.insidehighered.com/quicktakes/2013/05/23/green-light-competency-based-ed-capella
Jensen Comment
I anticipate that a lot of for-profit universities will be following Capella's
lead on this. However, in recent years the lead has been taken by public
universities like Western Governors University, the University of Wisconsin,
and the University of Akron. Also, early non-profit competency-based
universities include Southern New Hampshire University and the Chartered School
of Accountancy masters program in Western Canada.
Wisconsin System's Competency-Based Degrees as of November 28, 2012
---
http://www.wisconsin.edu/news/2012/r121128.htm
"Study: Little Difference in Learning in Online and In-Class Science
Courses," Inside Higher Ed, October 22, 2012 ---
http://www.insidehighered.com/quicktakes/2012/10/22/study-little-difference-learning-online-and-class-science-courses
A
study in Colorado has found little difference in
the learning of students in online or in-person introductory science
courses. The study tracked community college students who took science
courses online and in traditional classes, and who then went on to four-year
universities in the state. Upon transferring, the students in the two groups
performed equally well. Some science faculty members have expressed
skepticism about the ability of online students in science, due to the lack
of group laboratory opportunities, but the programs in Colorado work with
companies to provide home kits so that online students can have a lab
experience.
Jensen Comment
Firstly, note that online courses are not necessarily mass education (MOOC)
styled courses. The student-student and student-faculty interactions can be
greater online than onsite. For example, my daughter's introductory chemistry
class at the University of Texas had over 600 students. On the date of the final
examination the instructor had never met her and had zero control over her final
grade. On the
other hand, her microbiology instructor in a graduate course at the University
of Maine became her husband over 20 years ago.
Another factor is networking. For example, Harvard Business School students
meeting face-to-face in courses bond in life-long networks that may be stronger
than for students who've never established networks via classes, dining halls,
volleyball games, softball games, rowing on the Charles River, etc. There's
more to learning than is typically tested in competency examinations.
My point is that there are many externalities to both onsite and online
learning. And concluding that there's "little difference in learning" depends
upon what you mean by learning. The SCALE experiments at the University of
Illinois found that online students tended to do slightly better than onsite
students having the same instructor. This is partly because there are fewer logistical
time wasters in online learning. The effect becomes larger for off-campus
students where commuting time (as in Mexico City) can take hours going to and
from campus.
http://faculty.trinity.edu/rjensen/255wp.htm
Bob Jensen's threads on assessment are at
http://faculty.trinity.edu/rjensen/Assess.htm
"Innovations in Higher Education? Hah! College
leaders need to move beyond talking about transformation before it's too late,"
by Ann Kirschner, Chronicle of Higher Education, April 8, 2012 ---
http://chronicle.com/article/Innovations-in-Higher/131424/?sid=wc&utm_source=wc&utm_medium=en
. . .
(Conclusion)
Some of the most interesting work begins in the academy but grows beyond it.
"Scale" is not an academic value—but it should be. Most measures of prestige
in higher education are based on exclusivity; the more prestigious the
college, the larger the percentage of applicants it turns away. Consider the
nonprofit Khan Academy, with its library of more than 3,000 education videos
and materials, where I finally learned just a little about calculus. In the
last 18 months, Khan had 41 million visits in the United States alone. It is
using the vast data from that audience to improve its platform and grow
still larger. TED, the nonprofit devoted to spreading ideas, just launched
TED-Ed, which uses university faculty from around the world to create
compelling videos on everything from "How Vast Is the Universe?" to "How
Pandemics Spread." Call it Khan Academy for grown-ups. The Stanford
University professor Sebastian Thrun's free course in artificial
intelligence drew 160,000 students in more than 190 countries. No surprise,
the venture capitalists have come a-calling, and they are backing
educational startups like Udemy and Udacity.
All of those are signposts to a future where
competency-based credentials may someday compete with a degree.
At this point, if you are affiliated with an Ivy
League institution, you'll be tempted to guffaw, harrumph, and otherwise
dismiss the idea that anyone would ever abandon your institution for such
ridiculous new pathways to learning. You're probably right. Most
institutions are not so lucky. How long will it take for change to affect
higher education in major ways? Just my crystal ball, but I would expect
that institutions without significant endowments will be forced to change by
2020. By 2025, the places left untouched will be few and far between.
Here's the saddest fact of all: It is those leading
private institutions that should be using their endowments and moral
authority to invest in new solutions and to proselytize for experimentation
and change, motivated not by survival but by the privilege of securing the
future of American higher education.
The stakes are high. "So let me put colleges and
universities on notice," President Obama said in his recent State of the
Union address. "If you can't stop tuition from going up, the funding you get
from taxpayers will go down." Because of the academy's inability to police
itself and improve graduation rates, and because student debt is an
expedient political issue, the Obama administration recently threatened to
tie colleges' eligibility for campus-based aid programs to institutions'
success in improving affordability and value for students.
Whether the president's threat is fair or not, it
will not transform higher education. Change only happens on the ground.
Despite all the reasons to be gloomy, however, there is room for optimism.
The American university, the place where new ideas are born and lives are
transformed, will eventually focus that lens of innovation upon itself. It's
just a matter of time.
Jensen Comment
This is a long and important article that all educators should read carefully. Onsite
colleges have always served many purposes, but one purpose they never served is
to be knowledge fueling stations where students go to fill their tanks. At best,
colleges put a shot glass of fuel into tanks of unknown capacity.
Students go to an onsite college for many reasons other than to put fuel in
their knowledge tanks. They go to live and work in relatively safe transitional
environments between home and the mean streets. They go to mature, socialize,
mate, drink, laugh, and leap over the hurdles societies place in front of career
paths. The problem in the United States is that onsite college living and
education have become relatively expensive luxuries. Students must now make more
painful decisions about how much to impoverish their parents and how deeply to
go into debt.
I have a granddaughter 22 years old majoring in pharmacy (six year program).
She will pay off her student loans before she's 50 years old if she's lucky.
Some older students who've not been able to pay off their loans are becoming
worried that the Social Security Administration will garnish their retirement
Social Security monthly payments for unpaid student loans.
We've always known that colleges are not necessary places for learning and
scholarship. Until 43 years ago (when the Internet was born) private and public
libraries were pretty darn necessary for scholarship. Now the Internet provides
access to most known knowledge of the world. But becoming a scholar on the
Internet is relatively inefficient and overwhelming without the aid of
distillers of knowledge, which is where onsite and online college courses can
greatly add to efficiency of learning.
But college courses can be terribly disappointing as distillers of knowledge.
For one thing, grade inflation disgracefully watered down the amount of real
fuel in that shot glass of knowledge provided in a college course ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
Grades rather than learning became the tickets to careers and graduate schools,
thereby leading to street-smart cheating taking the place of the real
perspiration of learning ---
http://faculty.trinity.edu/rjensen/Plagiarism.htm
When 80% of Harvard's graduating class graduates cum laude, we can no
longer identify which graduates were the best scholars in their class.
Soon those graduates from Harvard, Florida A&M University, Capella
University, and those who learned on their own from free courses, video
lectures, and course materials on the Web will all face some sort of common
examinations (written and oral) of their competencies in specialties.
Competency testing will be the great leveler, much like licensure examinations
(the Bar Exam, the CPA Exam, the CFA Exam, etc.) that are graded on the basis
of what you know rather than where you learned it. It won't really
matter whether you paid a fortune to learn Bessel functions onsite at MIT or
learned them for free from the MITx online certificate program ---
http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI
If you are an educator or are becoming an educator, please read:
"Innovations in Higher Education? Hah! College leaders need to move beyond
talking about transformation before it's too late," by Ann Kirschner,
Chronicle of Higher Education, April 8, 2012 ---
http://chronicle.com/article/Innovations-in-Higher/131424/?sid=wc&utm_source=wc&utm_medium=en
At the University of Wisconsin
"Online Degree Program Lets Students Test Out of What They Already Know,"
by Angela Chen, June 20, 2012 ---
Click Here
http://chronicle.com/blogs/wiredcampus/online-degree-program-lets-students-test-out-of-what-they-already-know/37097?cid=wc&utm_source=wc&utm_medium=en
The University of Wisconsin plans to start a
“flexible degree” program online focused on allowing undergraduates to test
out of material they have mastered.
The new program, geared toward working adults with
some college education, operates under a “competency based” model, said
Raymond Cross, chancellor of the University of Wisconsin Colleges and
University of Wisconsin-Extension. This model is similar to the Advanced
Placement program, in which high-school students take AP tests to pass out
of college-level courses.
In the university’s new program, college courses
will be broken down into units. For example, a higher-level mathematics
class could include units such as linear algebra and trigonometry. Students
can then test out of certain units (instead of full courses) and spend time
learning only material that is new to them. Eventually, the units will build
into courses, and then a degree. The flexible-degree program and
traditional-degree program will have identical course requirements, and
since each flexible degree will be associated with a specific campus, the
student will receive a diploma from the originating campus and not from the
system.
“We’re trying to find ways to reduce the cost of
education,” Mr. Cross said. “Implicit in the model is the idea that you can
take lectures online from free sources—like Khan Academy and MITx—and
prepare yourself for the competency test. Then take the remaining courses
online at UW.”
The biggest challenge, he says, is determining how
to best test competency. Some units will require tests, while others may
require written papers or laboratory work. The difficulty of measuring
“competency” for any unit will affect the program’s pricing structure,
which has not yet been determined.
The idea of competency-based credentials is common
in technical and health fields, Mr. Cross said, but it is rare at
traditional universities. The program is part of a push to encourage
Wisconsin’s 700,000 college dropouts to go back to a university.
“With higher ed now, people often have a piece or
two missing in their education, so we are responding to the changes in our
culture and helping them pull all these pieces together,” Mr. Cross said.
“Students already interface with a lot of different institutions and
different classes and professors, and this will help that process. I don’t
think this diminishes traditional higher ed at all. I think it’ll enhance
it.”
The first courses in the flexible-degree program
will be available starting in fall 2013. The university is still developing
exact degree specifications, Mr. Cross said. Likely degrees include business
management and information technology.
Bob Jensen's threads on distance education training and education
alternatives ---
http://faculty.trinity.edu/rjensen/Crossborder.htm
"Score One for the Robo-Tutors," by Steve Kolowich, Inside Higher
Ed, May 22, 2012 ---
http://www.insidehighered.com/news/2012/05/22/report-robots-stack-human-professors-teaching-intro-stats
Without diminishing learning outcomes, automated
teaching software can reduce the amount of time professors spend with
students and could substantially reduce the cost of instruction, according
to new research.
In experiments at six public universities, students
assigned randomly to statistics courses that relied heavily on
“machine-guided learning” software -- with reduced face time with
instructors -- did just as well, in less time, as their counterparts in
traditional, instructor-centric versions of the courses. This largely held
true regardless of the race, gender, age, enrollment status and family
background of the students.
The
study comes at a time when “smart” teaching
software is being
increasingly included in conversations about
redrawing the economics of higher education. Recent investments by
high-profile universities in “massively open online courses,” or MOOCs, have
elevated the notion that technology has reached a tipping point: with the
right design, an online education platform, under the direction of a single
professor, might be capable of delivering meaningful education to hundreds
of thousands of students at once.
The new research from the nonprofit organization
Ithaka was seeking to prove the viability of a less expansive application of
“machine-guided learning” than the new MOOCs are attempting -- though one
that nevertheless could have real implications for the costs of higher
education.
The study, called “Interactive Learning Online at
Public Universities,” involved students taking introductory statistics
courses at six (unnamed) public universities. A total of 605 students were
randomly assigned to take the course in a “hybrid” format: they met in
person with their instructors for one hour a week; otherwise, they worked
through lessons and exercises using an artificially intelligent learning
platform developed by learning scientists at Carnegie Mellon University’s
Open Learning Initiative.
Researchers compared these students against their
peers in the traditional-format courses, for which students met with a live
instructor for three hours per week, using several measuring sticks: whether
they passed the course, their performance on a standardized test (the
Comprehensive Assessment of Statistics), and the final exam for the course,
which was the same for both sections of the course at each of the
universities.
The results will provoke science-fiction
doomsayers, and perhaps some higher-ed traditionalists. “Our results
indicate that hybrid-format students took about one-quarter less time to
achieve essentially the same learning outcomes as traditional-format
students,” report the Ithaka researchers.
The robotic software did have disadvantages, the
researchers found. For one, students found it duller than listening to a
live instructor. Some felt as though they had learned less, even if they
scored just as well on tests. Engaging students, such as professors might by
sprinkling their lectures with personal anecdotes and entertaining asides,
remains one area where humans have the upper hand.
But on straight teaching the machines were judged
to be as effective as, and more efficient than, their personality-having
counterparts.
It is
not the first time the software used in the
experiment, developed over the last five years or so by Carnegie Mellon’s
Open Learning Initiative, has been proven capable of teaching students
statistics in less time than a traditional course while maintaining learning
outcomes. So far that research has failed to persuade many traditional
institutions to deploy the software -- ostensibly for fear of shortchanging
students and alienating faculty with what is liable to be seen as an attempt
to use technology as a smokescreen for draconian personnel cuts.
But the authors of the new report, led by William
G. Bowen, the former president of Princeton University, hope their study --
which is the largest and perhaps the most rigorous to date on the
effectiveness of machine-guided learning -- will change minds.
“As several leaders of higher education made clear
to us in preliminary conversations, absent real evidence about learning
outcomes there is no possibility of persuading most traditional colleges and
universities, and especially those regarded as thought leaders, to push hard
for the introduction of [machine-guided] instruction” on their campuses.
Continued in article
"‘Free-Range Learners’: Study Opens Window
Into How Students Hunt for Educational Content Online," by Marc Parry,
Chronicle of Higher Education, April 25, 2012 ---
Click Here
http://chronicle.com/blogs/wiredcampus/free-range-learners-study-opens-window-into-how-students-hunt-for-educational-content-online/36137?sid=wc&utm_source=wc&utm_medium=en
Bob Jensen's threads on Tools and Tricks of the Trade are at
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm
Bob Jensen's threads on the explosion of distance education and training
---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#DistanceEducation
Outcomes Assessment
March 10, 2012 message from Penny Hanes
Can anyone point me to some good information on
course specific outcomes assessment in an accounting program?
Penny Hanes,
Associate Professor
Mercyhurst University
March 11, 2012 reply from Bob Jensen
Hi Penny,
Respondus has some testing software:
October 13, 2009 message from Richard Campbell
[campbell@RIO.EDU]
For anyone teaching online, this
software is a "must-have". They have released a new (4.0) version
with improved integration of multimedia. Below are some videos
(created in Camtasia) that demonstrate key features of the software.
http://www.respondus.com/
They have tightened up the
integration with publisher test banks.
Richard J. Campbell
mailto:campbell@rio.edu
Bob Jensen's threads for online assessment are at
http://faculty.trinity.edu/rjensen/Assess.htm#Examinations
There are different levels that you can approach such a topic. Many are
based on the mastery learning theory of Benjamin Bloom ---
http://en.wikipedia.org/wiki/Benjamin_Bloom
The best known accounting course assessment experiment using Bloom's
Taxonomy, for a set of courses for an entire program, was funded by an
Accounting Education Change Commission (AECC) grant to a very fine
accounting program at Kansas State University. The results of this and the
other AECC experiences are available from the AAA (ISBN
0-86539-085-1) ---
http://aaahq.org/AECC/changegrant/cover.htm
The KSU outcomes are reported in Chapter 3 ---
http://aaahq.org/AECC/changegrant/chap3.htm
I think Lynn Thomas at KSU was one of the principal investigators.
Michael Krause, Le Moyne College, has conducted some AAA programs on
Bloom's Taxonomy assessment.
Susan A. Lynn, University of Baltimore, has done some of this assessment for
intermediate accounting.
Susan Wolcott, Canada's Chartered Accountancy School of Business, has delved
into critical thinking assessment in accounting courses.
Bob Jensen's threads on assessment are at
http://faculty.trinity.edu/rjensen/Assess.htm
"A Measure of Learning Is Put to the Test: Results
of national exam will go public in 2012," by David Glenn, Chronicle of
Higher Education, September 19, 2010 ---
http://chronicle.com/article/A-Measure-of-Learning-Is-Put/124519/
You have 90 minutes to complete this test.
Here is your scenario: You are the assistant to a
provost who wants to measure the quality of your university's
general-education program. Your boss is considering adopting the Collegiate
Learning Assessment, or CLA, a national test that asks students to
demonstrate their ability to synthesize evidence and write persuasively.
The CLA is used at more than 400 colleges. Since
its debut a decade ago, it has been widely praised as a sophisticated
alternative to multiple-choice tests. At some colleges, its use has helped
spark sweeping changes in instruction and curriculum. And soon, many more of
the scores will be made public.
But skeptics say the test is too detached from the
substantive knowledge that students are actually expected to acquire. Others
say those who take the test have little motivation to do well, which makes
it tough to draw conclusions from their performance.
You may review the following documents:
Graphs of Collegiate Learning Assessment scores on
the University of Texas system's campuses over a four-year period. An essay
in which an assistant provost at a flagship campus describes her "grave
concerns" about using CLA scores to compare different colleges. A report in
which the CLA's creators reply to their critics. Your task: Write a two-page
memorandum to your boss that describes and analyzes the major arguments for
and against adopting the CLA. When you have finished, please hand your
materials to the proctor and leave the room quietly.
It is easy to see why the test format that you just
tasted has been so appealing to many people in higher education. The CLA is
a direct measure of skills, in contrast to surveys about how much time
students spend studying or how much they believe they have learned. And
unlike multiple-choice-based measures of learning, the CLA aspires to
capture a student's ability to make an argument and to interpret multiple
types of evidence. Those skills are close to the heart of a liberal-arts
education.
"Everything that No Child Left Behind signified
during the Bush administration—we operate 180 degrees away from that," says
Roger Benjamin, president of the Council for Aid to Education, which
developed and promotes the CLA. "We don't want this to be a high-stakes
test. We're putting a stake in the ground on classic liberal-arts issues.
I'm willing to rest my oar there. These core abilities, these higher-order
skills, are very important, and they're even more important in a knowledge
economy where everyone needs to deal with a surplus of information." Only an
essay test, like the CLA, he says, can really get at those skills.
Richard J. Shavelson, an educational psychologist
at Stanford University and one of the CLA's creators, makes a similar point
in his recent book, Measuring College Learning Responsibly: Accountability
in a New Era (Stanford University Press). "If you want to find out not only
whether a person knows the laws governing driving but also whether she can
actually drive a car," he writes, "don't judge her performance solely with a
multiple-choice test. Rather, also administer a behind-the-wheel driving
test."
"The CLA is really an authentic assessment
process," says Pedro Reyes, associate vice chancellor for academic planning
and assessment at the University of Texas system. "The Board of Regents here
saw that it would be an important test because it measures analytical
ability, problem-solving ability, critical thinking, and communication.
Those are the skills that you want every undergraduate to walk away with."
(Other large systems that have embraced the CLA include California State
University and the West Virginia system.)
One feature that appealed to Mr. Reyes and his
colleagues is that the CLA typically reports scores on a "value added"
basis, controlling for the scores that students earned on the SAT or ACT
while in high school. In raw terms, the highest scores in the Texas system
are at Austin and Dallas, the most-selective campuses. But in value-added
terms, it appears that students at San Antonio and El Paso make stronger
gains between their freshman and senior years.
The CLA's overseers, however, say they do not want
colleges to become overly concerned with bean-counting and comparing public
scores. Instead, they emphasize the ways in which colleges can use their own
CLA scores to experiment with improved models of instruction. Since 2007,
Mr. Benjamin's organization has invested heavily in "performance-task
academies," which encourage colleges to add CLA-style assignments to their
liberal-arts courses.
One campus that has gone down that road is the
University of Evansville, where first-year-experience courses have begun to
ask students to do performance tasks.
"We began by administering a retired CLA question,
a task that had to do with analyzing crime-reduction strategies," says Brian
R. Ernsting, an associate professor of biology at Evansville. "We talked
with the students about the modes of thinking that were involved there, how
to distinguish correlation from causation and anecdotes from data."
Similar things are happening at Pacific Lutheran
University. "Our psychology department is working on a performance task that
mirrors the CLA, but that also incorporates disciplinary content in
psychology," says Karen E. McConnell, director of assessment. "They're
planning to make that part of their senior capstone course."
How to Interpret the Scores?
Mr. Ernsting and Ms.
McConnell are perfectly sincere about using CLA-style tasks to improve
instruction on their campuses. But at the same time, colleges have a less
high-minded motive for familiarizing students with the CLA style: It just
might improve their scores when it comes time to take the actual test.
And that matters, in turn, because by 2012, the CLA
scores of more than 100 colleges will be posted, for all the world to see,
on the "College Portrait" Web site of the Voluntary System of
Accountability, an effort by more than 300 public colleges and universities
to provide information about life and learning on their campuses. (Not all
of the colleges have adopted the CLA. Some use the Educational Testing
Service's "Proficiency Profile," and others use the ACT's Collegiate
Assessment of Academic Proficiency.)
A few dozen colleges in the voluntary project,
including those in the Texas system, have already made their test scores
public. But for most, the 2012 unveiling will be a first.
"If a college pays attention to learning and helps
students develop their skills—whether they do that by participating in our
programs or by doing things on their own—they probably should do better on
the CLA," says Marc Chun, a research scientist at the Council for Aid to
Education. Such improvements, he says, are the main point of the project.
But that still raises a question: If familiarizing
students with CLA-style tasks does raise their scores, then the CLA might
not be a pure, unmediated reflection of the full range of liberal-arts
skills. How exactly should the public interpret the scores of colleges that
do not use such training exercises?
Trudy W. Banta, a professor of higher education and
senior adviser to the chancellor for academic planning and evaluation at
Indiana University-Purdue University at Indianapolis, believes it is a
serious mistake to publicly release and compare scores on the test. There is
too much risk, she says, that policy makers and the public will misinterpret
the numbers.
"Standardized tests of generic skills—I'm not
talking about testing in the major—are so much a measure of what students
bring to college with them that there is very little variance left out of
which we might tease the effects of college," says Ms. Banta, who is a
longtime critic of the CLA. "There's just not enough variance there to make
comparative judgments about the comparative quality of institutions."
Compounding that problem, she says, is the fact
that most colleges do not use a true longitudinal model: That is, the
students who take the CLA in their first year do not take it again in their
senior year. The test's value-added model is therefore based on a
potentially apples-and-oranges comparison.
The test's creators reply that they have solved
that problem by doing separate controls for the baseline skills of freshman
test-takers and senior test-takers. That is, the freshman test-takers'
scores are assessed relative to their SAT and ACT scores, and so are senior
test-takers' scores. For that reason, colleges cannot game the test by
recruiting an academically weak pool of freshmen and a strong pool of
seniors.
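The adjustment described above can be sketched as a regression exercise. The following is a minimal illustration under my own assumptions, not the CLA's actual methodology (the helper names and the synthetic data are hypothetical): fit separate SAT-to-CLA baselines on the national freshman and senior pools, then score an institution by the difference between its seniors' and freshmen's mean residuals, so that a weak freshman intake cannot inflate the result.

```python
from statistics import mean

def fit_line(x, y):
    """Ordinary least squares slope and intercept for one predictor."""
    mx, my = mean(x), mean(y)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

def expected_score(model, sat):
    """CLA score predicted from an SAT score by a fitted baseline."""
    slope, intercept = model
    return slope * sat + intercept

def value_added(nat_freshmen, nat_seniors, inst_freshmen, inst_seniors):
    """
    Each argument is a list of (sat, cla) pairs. National pools fit
    separate baselines for each cohort; the institution's value added is
    its mean senior residual minus its mean freshman residual.
    """
    fr_model = fit_line(*zip(*nat_freshmen))
    sr_model = fit_line(*zip(*nat_seniors))
    fr_resid = mean(cla - expected_score(fr_model, sat)
                    for sat, cla in inst_freshmen)
    sr_resid = mean(cla - expected_score(sr_model, sat)
                    for sat, cla in inst_seniors)
    return sr_resid - fr_resid
```

With synthetic national baselines and an institution whose seniors beat the senior baseline by 50 points while its freshmen sit exactly on the freshman baseline, `value_added` returns 50, regardless of how weak or strong the freshman intake was.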
Another concern is that students do not always have
much motivation to take the test seriously. That problem is especially
challenging with seniors, who are typically recruited to take the CLA toward
the end of their final semester, when they can already taste the graduation
champagne. Who at that stage of college wants to carefully write a 90-minute
essay that isn't required for any course?
For that reason, many colleges have had to come up
with elaborate incentives to get students to take the test at all. (See the
graphic below.) A recent study at Central Connecticut State University found
that students' scores were highly correlated with how long they had spent
writing their essays.
Take My Test — Please
The Collegiate Learning
Assessment has been widely praised. But it involves an arduous 90 minutes of
essay writing. As a result, many colleges have resorted to incentives and
requirements to get students to take the test, and to take it seriously.
As of last week, there were some significant bugs
in the presentation of CLA scores on the College Portrait Web site. Of the
few dozen universities that had already chosen to publish CLA data on that
site, roughly a quarter of the reports appeared to include erroneous
descriptions of the year-to-year value-added scores. In some cases, the
errors made the universities' gains appear better than they actually were.
In other cases, they made them seem worse.
Seniors at California State University at
Bakersfield, for example, had CLA scores that were 155 points higher than
freshmen's, while the two cohorts' SAT scores were similar. The College
Portrait site said that the university's score gains were "below what would
be expected." The University of Missouri at St. Louis, meanwhile, had senior
scores that were only 64 points higher than those of freshmen, and those two
cohorts had identical ACT scores. But those score gains were reported as
"well above what would be expected."
"It doesn't make sense, what's presented here,"
said Stephen Klein, the CLA's director of research and development, when The
Chronicle pointed out such discrepancies. "This doesn't look like something
we would produce." Another official at the Council for Aid to Education
confirmed that at least three of the College Portrait reports were
incorrect, and said there appeared to be systematic problems with the site's
presentation of the data.
As The Chronicle went to press, the Voluntary
System of Accountability's executive director, Christine M. Keller, said her
office would identify and fix any errors. The forms that institutions fill
out for the College Portrait, she said, might be confusing for
administrators because they do not always mirror the way the CLA itself (and
the Collegiate Assessment of Academic Proficiency and ETS's Proficiency
Profile) present their official data. In any case, Ms. Keller said, a
revised version of the College Portrait site is scheduled to go online in
December.
It is clear that CLA scores do reflect some broad
properties of a college education. In a study for their forthcoming book,
Academically Adrift: Limited Learning on College Campuses (University of
Chicago Press), the sociologists Richard Arum and Josipa Roksa asked
students at 24 colleges to take the CLA during their first semester and then
again during their fourth. Their study was conducted before any significant
number of colleges began to consciously use CLA-style exercises in the
classroom.
The two authors found one clear pattern: Students'
CLA scores improved if they took courses that required a substantial amount
of reading and writing. Many students didn't take such courses, and their
CLA scores tended to stay flat.
The pattern was consistent across the ability
spectrum: Regardless of whether a student's CLA scores were generally low or
high, their scores were more likely to improve if they had taken demanding
college courses.
So there is at least one positive message in Mr.
Arum and Ms. Roksa's generally gloomy book. Colleges that make demands on
students can actually develop their skills on the kinds of things measured
by the CLA.
"We found that students in traditional liberal-arts
fields performed and improved more over time on the CLA," says Mr. Arum, a
professor at New York University. "In other fields, in education, business,
and social work, they didn't do so well. Some of that gap we can trace back
to time spent studying. That doesn't mean that students in education and
business aren't acquiring some very valuable skills. But at the same time,
the communication and reasoning skills measured by the CLA really are
important to everyone."
Dueling Purposes
For more than a century, scholars
have had grand visions of building national tests for measuring
college-level learning. Mr. Shavelson, of Stanford, sketches several of
those efforts in his book, including a 1930s experiment that tested
thousands of students at colleges throughout Pennsylvania. (Sample question:
"Of Corneille's plays, 1. Polyeucte, 2. Horace, 3. Cinna, 4. Le Cid shows
least the influence of classical restraint.")
Mr. Shavelson believes the CLA's essays and
"performance tasks" offer an unusually sophisticated way of measuring what
colleges do, without relying too heavily on factual knowledge from any one
academic field. But in his book he also notes the tension between the two
basic uses of nationally normed tests: Sometimes they're used for internal
improvements, and sometimes they're used as benchmarks for external
comparisons. Those two uses don't always sit easily together. Politicians
and consumers want easily interpretable scores, while colleges need subtler
and more detailed data to make internal improvements.
Can the CLA fill both of those roles? That is the
experiment that will play out as more colleges unveil their scores.
Teaching to the Test Somewhat
"An Assessment Test Inspires Tools for Teaching," by David Glenn.
Chronicle of Higher Education, September 19, 2010 ---
http://chronicle.com/article/An-Assessment-Test-Inspires/124537/
"Oregon Trains Educators to Improve Learning for All Students," by
Tanya Roscorla, Converge Magazine, January 6, 2012 ---
http://www.convergemag.com/curriculum/Oregon-DATA-Year5.html?elq=1e13f85f2dc34e84b8b1397c797c2f58
For years, Oregon school districts have collected
student test data. In field assessments, the Oregon Education Department
found that 125 different assessments existed in the state to track student
progress.
But the data sat in warehouses, unused or misused.
Teachers and administrators didn't know how to easily find, analyze and use
student assessment results to inform instruction, said Mickey Garrison, data
literacy director for the Oregon Department of Education.
Five years ago, the department started the Oregon
Direct Access to Achievement Project with a $4.7 million federal grant to
improve student learning. This week, the project is publishing its Year 5
report.
Through the project, Oregon now has an adaptable
data framework and a network for districts that connects virtual teams of
administrators and teachers around the state. The framework has also helped
the state mesh the Common Core State Standards with its own.
"Moving ideas from paper into practice is not
something that I'm gonna say we in education have necessarily done a good
job of in the past, but the model that we created for data definitely goes
deep into implementation, and that's essential," Garrison said.
Continued in article
The
problem is that our students choose very bland, low nourishment diets in our
modern day smorgasbord curricula. Their concern is with their grade averages
rather than their education. And why not? Grades for students and turf for
faculty have become the keys to the kingdom!
Bob Jensen
"Are Undergraduates Actually Learning Anything?" by Richard Arum and
Josipa Roksa. Chronicle of Higher Education, January 18, 2011 ---
http://chronicle.com/article/Are-Undergraduates-Actually/125979/
Drawing on survey responses, transcript data, and
results from the Collegiate Learning Assessment (a standardized test taken
by students in their first semester and at the end of their second year),
Richard Arum and Josipa Roksa concluded that a significant percentage of
undergraduates are failing to develop the broad-based skills and knowledge
they should be expected to master. Here is an excerpt from Academically
Adrift: Limited Learning on College Campuses (University of Chicago Press),
their new book based on those findings.
Continued in article
Our Compassless Colleges: What are students really not learning?
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#Berkowitz
What questions might classroom teachers ask of their students,
the answers to which would allow a strong inference that the students
"understood"?
"The Assessment of “Understanding,” by Lloyd Bond, Carnegie Foundation for
Advancement in Teaching ---
Click Here
Study to remember and you will forget.
Study to understand and you will remember.
—Anonymous
I once sat on the dissertation
committee of a graduate student in mathematics education who had examined
whether advanced graduate students in math and science education could
explain the logic underlying a popular procedure for extracting square roots
by hand. Few could explain why the procedure worked. Intrigued by the
results, she decided to investigate whether they could explain the logic
underlying long division. To her surprise, most in her sample could not. All
of the students were adept at division, but few understood why the procedure
worked.
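The hand procedures those students could execute but not explain do have an explainable logic. As an illustration only (not part of Bond's article), here is a sketch of the digit-by-digit square-root method in Python, with the "why it works" carried in the comments:

```python
def digit_sqrt(n: int) -> int:
    """Integer square root via the classic digit-by-digit (hand) method."""
    # Split n into pairs of digits from the right, most significant first.
    pairs = []
    while n > 0:
        pairs.append(n % 100)
        n //= 100
    pairs.reverse()

    root, remainder = 0, 0
    for pair in pairs:
        remainder = remainder * 100 + pair
        # Find the largest digit d with (20*root + d) * d <= remainder.
        # This works because (10a + d)^2 = 100a^2 + (20a + d)d:
        # the doubled partial root is why the pencil-and-paper recipe
        # says "double what you have so far."
        d = 0
        while (20 * root + d + 1) * (d + 1) <= remainder:
            d += 1
        remainder -= (20 * root + d) * d
        root = root * 10 + d
    return root
```

The identity (10a + d)² = 100a² + (20a + d)d is the entire "mystery" of the hand procedure, and the same expansion explains long division: each step commits to one digit and carries the uncommitted remainder forward.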
In a series of studies at Johns Hopkins University,
researchers found that first year physics students could unerringly solve
fairly sophisticated problems in classical physics involving moving bodies,
but many did not understand the implications of their answers for the
behavior of objects in the real world. For example, many could not draw the
proper trajectories of objects cut from a swinging pendulum that their
equations implied.
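The pendulum misconception can be stated precisely: once the string is cut, the bob is an ordinary projectile whose initial velocity is tangent to the arc, not directed outward along the string. A minimal sketch for the simplest case, release at the bottom of the swing (an illustration only, not taken from the Hopkins studies):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def cut_at_bottom(length, angle_max, t):
    """(x, y) of the bob t seconds after the string is cut at the lowest
    point of the swing (pivot at the origin, y pointing up).

    At the bottom the velocity is purely horizontal; energy conservation
    gives its magnitude as sqrt(2 g L (1 - cos(angle_max))). After the
    cut the bob is a plain projectile -- it does NOT fly radially
    outward, which is the misconception the studies describe.
    """
    v = math.sqrt(2 * G * length * (1 - math.cos(angle_max)))
    x = v * t
    y = -length - 0.5 * G * t * t
    return x, y
```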
What then does it mean to “understand” something—a
concept, a scientific principle, an extended rhetorical argument, a
procedure or algorithm? What questions might classroom teachers ask of their
students, the answers to which would allow a strong inference that the
students “understood”? Every educator from kindergarten through graduate and
professional school must grapple almost daily with this fundamental
question. Do my students really “get it”? Do they genuinely understand the
principle I was trying to get across at a level deeper than mere
regurgitation? Rather than confront the problem head on, some teachers,
perhaps in frustration, sidestep it. Rather than assign projects or
construct examinations that probe students’ deep understanding, they require
only that students apply the learned procedures to problems highly similar
to those discussed in class. Other teachers with the inclination, time and
wherewithal often resort to essay tests that invite their students to probe
more deeply, but as often as not their students decline the invitation and
stay on the surface.
I have thought about issues surrounding the
measurement of understanding on and off for years, but have not
systematically followed the literature on the topic. On a lark, I conducted
three separate Google searches and obtained the following results:
- “nature of understanding” 41,600 hits
- “measurement of understanding” 66,000 hits
- “assessment of understanding” 34,000 hits
Even with the addition of “classroom” to the
search, the number of hits exceeded 9,000 for each search. The listings
covered the spectrum—from suggestions to elementary school teachers on how
to detect “bugs” in children’s understanding of addition and subtraction, to
discussions of laboratory studies of brain activity during problem solving,
to abstruse philosophical discussions in hermeneutics and epistemology.
Clearly, this approach was taking me everywhere, which is to say, nowhere.
Fully aware that I am ignoring much that has been
learned, I decided instead to draw upon personal experience—some 30 years in
the classroom—to come up with a list of criteria that classroom teachers
might use to assess understanding. The list is undoubtedly incomplete, but
it is my hope that it will encourage teachers to not only think more
carefully about how understanding might be assessed, but also—and perhaps
more importantly—encourage them to think more creatively about the kinds of
activities they assign their classes. These activities should stimulate
students to study for understanding, rather than for mere regurgitation at
test time.
The student who understands a principle, rule,
procedure or concept should be able to do the following tasks (these are
presented in no particular order and their actual difficulties are an
empirical question):
Construct problems that illustrate the
concept, principle, rule or procedure in question.
As the two anecdotes above illustrate, students may know how to use a
procedure or solve specific textbook problems in a domain, but may still not
fully understand the principle involved. A more stringent test of
understanding would be that they can construct problems themselves that
illustrate the principle. In addition to revealing much to instructors about
the nature of students’ understanding, problem construction by students can
be a powerful learning experience in its own right, for it requires the
student to think carefully about such things as problem constraints and data
sufficiency.
Identify and, if possible, correct a
flawed application of a principle or procedure.
This is basically a check on conceptual and procedural knowledge. If a
student truly understands a concept, principle or procedure, she should be
able to recognize when it is faithfully and properly applied and when it is
not. In the latter case, she should be able to explain and correct the
misapplication.
Distinguish between instances and
non-instances of a principle; or stated somewhat differently, recognize and
explain “problem isomorphs,” that is, problems that differ in their context
or surface features, but are illustrations of the same underlying principle.
In a famous and highly cited study by Michelene Chi and her colleagues at
the Learning Research and Development Center, novice physics students and
professors of physics were each presented with problems typically found in
college physics texts and asked to sort or categorize them into groups that
“go together” in some sense. They were then asked to explain the basis for
their categorization. The basic finding (since replicated in many different
disciplines) was that the novice physics students tended to sort problems on
the basis of their surface features (e.g., pulley problems, work problems),
whereas the experts tended to sort problems on the basis of their “deep
structure,” the underlying physical laws that they illustrated (e.g.,
Newton’s third law of motion, the second law of thermodynamics). This
profoundly revealing finding is usually discussed in the context of
expert-novice comparisons and in studies of how proficiency develops, but it
is also a powerful illustration of deep understanding.
Explain a principle or concept to a
naïve audience.
One of the most difficult questions on an examination I took in graduate
school was the following: “How would you explain factor analysis to your
mother?” That I remember this question over 30 years later is strong
testimony to the effect it had on me. I struggled mightily with it. But the
question forced me to think about the underlying meaning of factor analysis
in ways that had not occurred to me before.
Mathematics educator and researcher Liping Ma, in
her classic exposition Knowing and Teaching Elementary Mathematics
(Lawrence Erlbaum, 1999), describes the difficulty some fifth and sixth
grade teachers in the United States encounter in explaining fundamental
mathematical concepts to their charges. Many of the teachers in her sample,
for example, confused division by 1/2 with division by two. The teachers
could see on a verbal level that the two were different but they could
neither explain the difference nor the numerical implications of that
difference. It follows that they could not devise simple story problems and
other exercises for fifth and sixth graders that would demonstrate the
difference.
To be sure, students may well understand a
principle, procedure or concept without being able to do all of the above.
But a student who can do none of the above almost certainly does not
understand, and students who can perform all of the above tasks flawlessly
almost certainly do understand.
Continued in article
Jensen Comment
This is a huge problem in accounting education, because so many of us teach "how
to" procedures, often very complex procedures, without really knowing whether
our students truly understand the implications of what they are doing for
decision makers who use accounting information, for fraud detection, for fraud
prevention, etc. For example, when teaching rules for asset capitalization
versus expensing, it might help students better understand if they
simultaneously learned about how and why Worldcom understated earnings by over a
billion dollars by capitalizing expenditures that should have been expensed ---
http://faculty.trinity.edu/rjensen/FraudEnron.htm#WorldCom
Also see
http://faculty.trinity.edu/rjensen/265wp.htm
Education is an admirable thing, but it is well to
remember from time to time that nothing that is worth learning can be taught.
Oscar Wilde
"The Objective of Education is Learning, Not Teaching (audio version
available)," University of Pennsylvania's Knowledge@Wharton, August 20, 2008
---
http://knowledge.wharton.upenn.edu/article.cfm;jsessionid=9a30b5674a8d333e4d18?articleid=2032
In their book, Turning Learning Right Side
Up: Putting Education Back on Track, authors Russell L. Ackoff and
Daniel Greenberg point out that today's education system is seriously flawed
-- it focuses on teaching rather than learning. "Why should children -- or
adults -- be asked to do something computers and related equipment can do
much better than they can?" the authors ask in the following excerpt from
the book. "Why doesn't education focus on what humans can do better than the
machines and instruments they create?"
"Education is an admirable thing, but it is well to
remember from time to time that nothing that is worth learning can be
taught."
-- Oscar Wilde
Traditional education focuses on teaching, not
learning. It incorrectly assumes that for every ounce of teaching there is
an ounce of learning by those who are taught. However, most of what we learn
before, during, and after attending schools is learned without its being
taught to us. A child learns such fundamental things as how to walk, talk,
eat, dress, and so on without being taught these things. Adults learn most
of what they use at work or at leisure while at work or leisure. Most of
what is taught in classroom settings is forgotten, and much of what is
remembered is irrelevant.
In most schools, memorization is mistaken for
learning. Most of what is remembered is remembered only for a short time,
but then is quickly forgotten. (How many remember how to take a square root
or ever have a need to?) Furthermore, even young children are aware of the
fact that most of what is expected of them in school can better be done by
computers, recording machines, cameras, and so on. They are treated as poor
surrogates for such machines and instruments. Why should children -- or
adults, for that matter -- be asked to do something computers and related
equipment can do much better than they can? Why doesn't education focus on
what humans can do better than the machines and instruments they create?
When those who have taught others are asked who in
the classes learned most, virtually all of them say, "The teacher." It is
apparent to those who have taught that teaching is a better way to learn
than being taught. Teaching enables the teacher to discover what one thinks
about the subject being taught. Schools are upside down: Students should be
teaching and faculty learning.
After lecturing to undergraduates at a major
university, I was accosted by a student who had attended the lecture. After
some complimentary remarks, he asked, "How long ago did you teach your first
class?"
I responded, "In September of 1941."
"Wow!" The student said. "You mean to say you have
been teaching for more than 60 years?"
"Yes."
"When did you last teach a course in a subject that
existed when you were a student?"
This difficult question required some thought.
After a pause, I said, "September of 1951."
"Wow! You mean to say that everything you have
taught in more than 50 years was not taught to you; you had to
learn on your own?"
"Right."
"You must be a pretty good learner."
I modestly agreed.
The student then said, "What a shame you're not
that good a teacher."
The student had it right; what most faculty members
are good at, if anything, is learning rather than teaching. Recall that in
the one-room schoolhouse, students taught students. The teacher served as a
guide and a resource but not as one who force-fed content into students'
minds.
Ways of Learning
There are many different ways of learning; teaching
is only one of them. We learn a great deal on our own, in independent study
or play. We learn a great deal interacting with others informally -- sharing
what we are learning with others and vice versa. We learn a great deal by
doing, through trial and error. Long before there were schools as we know
them, there was apprenticeship -- learning how to do something by trying it
under the guidance of one who knows how. For example, one can learn more
architecture by having to design and build one's own house than by taking
any number of courses on the subject. When physicians are asked whether they
learned more in classes or during their internship, without exception they
answer, "Internship."
In the educational process, students should be
offered a wide variety of ways to learn, among which they could choose or
with which they could experiment. They do not have to learn different things
the same way. They should learn at a very early stage of "schooling" that
learning how to learn is largely their responsibility -- with the help they
seek but that is not imposed on them.
The objective of education is learning, not
teaching.
There are two ways that teaching is a powerful tool
of learning. Let's abandon for the moment the loaded word teaching, which is
unfortunately all too closely linked to the notion of "talking at" or
"lecturing," and use instead the rather awkward phrase explaining something
to someone else who wants to find out about it. One aspect of explaining
something is getting yourself up to snuff on whatever it is that you are
trying to explain. I can't very well explain to you how Newton accounted for
planetary motion if I haven't boned up on my Newtonian mechanics first. This
is a problem we all face all the time, when we are expected to explain
something. (Wife asks, "How do we get to Valley Forge from home?" And
husband, who does not want to admit he has no idea at all, excuses himself
to go to the bathroom; he quickly Googles Mapquest to find out.) This is one
sense in which the one who explains learns the most, because the person to
whom the explanation is made can afford to forget the explanation promptly
in most cases; but the explainers will find it sticking in their minds a lot
longer, because they struggled to gain an understanding in the first place
in a form clear enough to explain.
The second aspect of explaining something that
leaves the explainer more enriched, and with a much deeper understanding of
the subject, is this: To satisfy the person being addressed, to the point
where that person can nod his head and say, "Ah, yes, now I understand!"
explainers must not only get the matter to fit comfortably into their own
worldview, into their own personal frame of reference for understanding the
world around them, they also have to figure out how to link their frame of
reference to the worldview of the person receiving the explanation, so that
the explanation can make sense to that person, too. This involves an intense
effort on the part of the explainer to get into the other person's mind, so
to speak, and that exercise is at the heart of learning in general. For, by
practicing repeatedly how to create links between my mind and another's, I
am reaching the very core of the art of learning from the ambient culture.
Without that skill, I can only learn from direct experience; with that
skill, I can learn from the experience of the whole world. Thus, whenever I
struggle to explain something to someone else, and succeed in doing so, I am
advancing my ability to learn from others, too.
Learning through Explanation
This aspect of learning through explanation has
been overlooked by most commentators. And that is a shame, because both
aspects of learning are what makes the age mixing that takes place in the
world at large such a valuable educational tool. Younger kids are always
seeking answers from older kids -- sometimes just slightly older kids (the
seven-year-old tapping the presumed life wisdom of the
so-much-more-experienced nine-year-old), often much older kids. The older
kids love it, and their abilities are exercised mightily in these
interactions. They have to figure out what it is that they understand about
the question being raised, and they have to figure out how to make their
understanding comprehensible to the younger kids. The same process occurs
over and over again in the world at large; this is why it is so important to
keep communities multi-aged, and why it is so destructive to learning, and
to the development of culture in general, to segregate certain ages
(children, old people) from others.
Continued in article
Bob Jensen's threads on assessment, learning, and technology in education
are at
http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
In particular note the document on assessment ---
http://faculty.trinity.edu/rjensen/assess.htm
June 18, 2006
message from Bob Kennelly
[bob_kennelly@YAHOO.COM]
I am a data analyst with the Federal Government,
recently assigned a project to integrate our accounting codes with XBRL
accounting codes, primarily for the quarterly reporting of banking
financial information.
For the past few weeks, I've been searching the
Web looking for educational materials that will help us map, roll up and
drill down the data that we receive from the banks that we regulate, to
the more generic XBRL accounting codes.
Basically, I'm hoping to provide my team members
with the tools to help them make more informed decisions on how to
classify accounting codes and capture their findings for further review
and discussion.
To my surprise there isn't the wealth of accounting
information that I thought there would be on the Web, but I am very
relieved to have found Bob Jensen's site and in particular an article
which refers to the kind of information-gathering
approaches that I'm hoping to discover!
Here is the brief on that article:
"Using Hypertext in Instructional Material: Helping Students Link
Accounting Concept Knowledge to Case Applications," by Dickie Crandall
and Fred Phillips, Issues in Accounting Education, May 2002, pp. 163-184
---
We studied whether instructional material that
connects accounting concept discussions with sample case applications
through hypertext links would enable students to better understand how
concepts are to be applied to practical case situations.
Results from a laboratory experiment indicated
that students who learned from such hypertext-enriched instructional
material were better able to apply concepts to new accounting cases than
those who learned from instructional material that contained identical
content but lacked the concept-case application hyperlinks.
Results also indicated that the learning benefits
of concept-case application hyperlinks in instructional material were
greater when the hyperlinks were self-generated by the students rather
than inherited from instructors, but only when students had generated
appropriate links.
Could anyone be so kind as to please suggest other
references, articles or tools that will help us better understand and
classify the broad range of accounting terminologies and methodologies
please?
Thanks very much!
Bob Kennelly
OFHEO
June 19, 2006 reply from Bob Jensen
Hi Bob,
You may find the following documents of related interest:
"Internet Financial Reporting: The Effects of Hyperlinks and Irrelevant
Information on Investor Judgments," by Andrea S. Kelton (Ph.D. Dissertation
at the University of Tennessee) ---
http://www.mgt.ncsu.edu/pdfs/accounting/kelton_dissertation_1-19-06.pdf
"Extendible Adaptive Hypermedia Courseware: Integrating Different Courses
and Web Material," in Adaptive Hypermedia and Adaptive Web-Based Systems:
International Conference, AH 2000, Trento, Italy, August 2000, Proceedings,
P. Brusilovsky, O. Stock, and C. Strapparava (Eds.), Lecture Notes in
Computer Science, Vol. 1892, Springer Berlin/Heidelberg, 2000,
ISSN 0302-9743 ---
Click Here
"Concept, Knowledge, and Thought," G. C. Oden, Annual Review of
Psychology Vol. 38: 203-227 (Volume publication date January 1987) ---
Click Here
"A Framework for Organization and Representation of Concept Knowledge in
Autonomous Agents," by Paul Davidsson, Department of Computer Science,
University of Lund, Box 118, S–221 00 Lund, Sweden email:
Paul.Davidsson@dna.lth.se
"Active Concept Learning for Image Retrieval in Dynamic Databases," by
A. Dong and B. Bhanu, Center for Research in Intelligent Systems,
University of California, Riverside, CA, USA, in Proceedings of the Ninth
IEEE International Conference on Computer Vision, October 13-16, 2003,
Vol. 1, pp. 90-95, ISBN 0-7695-1950-4 ---
Click Here
"Types and qualities of knowledge," by Ton de Jong, Monica G.M.
Ferguson-Hessler, Educational Psychologist 1996, Vol. 31, No. 2,
Pages 105-113 ---
Click Here
Also note
http://faculty.trinity.edu/rjensen/assess.htm#DownfallOfLecturing
Hope this helps
Bob Jensen
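As an aside, the "map and roll up" task Kennelly describes is, at its core, a many-to-one code mapping followed by aggregation. A minimal sketch, with hypothetical internal codes and XBRL element names (real mappings come from the published XBRL taxonomies, not from this table):

```python
from collections import defaultdict

# Hypothetical mapping from internal bank report codes to generic
# XBRL element names. A production mapping would be drawn from the
# applicable regulatory taxonomy, not hard-coded like this.
CODE_MAP = {
    "BNK-1010": "CashAndCashEquivalents",
    "BNK-1020": "CashAndCashEquivalents",
    "BNK-2210": "DepositsLiabilities",
}

def roll_up(balances):
    """Aggregate internal-code balances up to their XBRL elements."""
    totals = defaultdict(float)
    for code, amount in balances.items():
        totals[CODE_MAP[code]] += amount
    return dict(totals)
```

Drill-down is the inverse view: keeping, for each XBRL element, the list of internal codes (and amounts) that contributed to its total, so a reviewer can trace an aggregate back to its sources.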
Assessing-to-Learn Physics: Project Website ---
http://a2l.physics.umass.edu/
Bob Jensen's threads on science and medicine tutorials are at
http://faculty.trinity.edu/rjensen/Bookbob2.htm#Science
Tips on Preparing Multiple Choice Examinations
Some great tips on preparing multiple choice examinations
"Multiple Choice Exam Theory (Just In Time For The New Term)," by
Jonathan Sterne, Chronicle of Higher Education, January 10, 2013 ---
http://chronicle.com/blogs/profhacker/multiple-choice-exam-theory/45275?cid=wc&utm_source=wc&utm_medium=en
[This is a guest post by Jonathan
Sterne, an associate professor in the
Department of Art
History and Communication Studies at
McGill University.
His latest books are
MP3: The Meaning of a Format (Duke University Press) and
The Sound Studies Reader (Routledge). Find him online at
http://sterneworks.org
and follow him on Twitter
@jonathansterne.--@JBJ]
Every summer, before
I assemble my fall courses, I read a book on pedagogy. Last summer’s
choice was
Cathy Davidson’s
Now You See It (except I read it in the
spring). Those who are familiar with critiques of mainstream educational
practice will find many familiar arguments, but Now You See It
crucially connects them with US educational policy. The book also
challenges teachers who did not grow up online to think about what
difference it makes that their students did. In particular, Davidson
skewers pieties about attention, mastery, testing and evaluation.
The one part of
the book I couldn’t make my peace with was her critique of multiple
choice testing. I agree in principle with everything she says, but what
can you do in large lecture situations, where many of the small class
principles—like the ones she put into practice for
This Is Your Brain on the Internet—won’t work
simply because of the scale of the operation?
When I asked her
about it, we talked about multiple choice approaches that might work.
Clickers are currently popular in one corner of pedagogical theory for
large lectures. Like many schools,
McGill
promotes them as a kind of participation
(which is roughly at the level of voting on American Idol –
except as Henry Jenkins shows, there’s a lot more affect invested
there). I dislike clickers because they eliminate even more spontaneity
from the humanities classroom
than slideware already does. I prefer
in-class exercises built around techniques like
think-write-pair-share.
Multiple-Choice
Testing for Comprehension, Not Recognition
I’ve got another system
I want to share here, which is admittedly imperfect. Indeed, I brought
it up because I was hoping Cathy knew a better solution for big classes.
She didn’t, so I’m posting it here because it’s the best thing I
currently know of.
It’s based on
testing theory I read many years ago, and it seems to work in my
large-lecture introduction to Communication Studies course. It is a
multiple choice system that tests for comprehension, rather than
recognition. As
Derek Bruff explained in a 2010 ProfHacker post, multiple-choice works
best when it operates at the conceptual level, rather than at the level
of regurgitating facts. This works perfectly
for me, since Intro to Communication Studies at McGill is largely
concept-driven.
A couple caveats are in
order here: 1) students generally don’t like it. It looks like other
multiple choice tests but it’s not, so skills that were well developed
in years of standardized testing are rendered irrelevant. 2) multiple
choice is only one axis of evaluation for the course, and as with
Bruff’s final, multiple-choice makes up only part of the exam, with the
other part being free-written short answers. Students must write and
synthesize, and they are subject to pop quizzes, which they also dislike
(except for a small subset that realizes a side-effect is they keep up
with readings). On the syllabus, I am completely clear about which
evaluation methods are coercive (those I use to make them keep up with
the reading and material) and which are creative (where they must
analyze, synthesize and make ideas their own).
So, here’s my multiple
choice final exam formula.
Step 1:
Make it semi-open book. Each student is allowed to bring in a single
sheet of 8.5" x 11" paper, double-sided, single-layered (don't ask). On
that sheet, they can write anything they want, so long as it’s in their
own handwriting. They must submit the sheet with the exam.
The advantage of this
method is it allows students to write down anything they have trouble
memorizing, but it forces them to study and synthesize before they get
to the moment of the test. Even if they copy someone else, they still
have to expend all that energy writing down the information. And most
students turn in very original, very intricate study guides.
Step 2:
Eliminate recognition as a factor in the test.
Most multiple choice
questions rely on recognition as the path to the right answer. You get a
question stem, and then four or five answers, one of which will be
right. Often, the right answer is something the student will recognize
from the reading, while the wrong answers aren’t.
But recognition isn’t
the kind of thinking we want to test for. We want to test if the student
understands the reading.
The answer to this
problem is simple: spend more time writing the wrong answers.
Pretty much all my
multiple choice exam questions take this form:
Question stem.
–> Right answer
–> True statement from the same reading or a related reading, but
that does not correctly answer the question
–> Argument or position author rehearsed and dismissed; or that
appears in another reading that contradicts the right answer.
From here, you’re
basically set, though I often add a 4th option that is “the common
sense” answer (since people bring a lot of preconceptions to media
studies), or I take the opportunity to crack a joke.
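Sterne's question formula lends itself to a small data structure that keeps a question bank honest about why each distractor exists. The sketch below is hypothetical (the class and role names are mine, not Sterne's):

```python
import random
from dataclasses import dataclass

# Distractor roles from the formula above: every option records why it
# is on the exam, which forces real distractors to be written.
ROLES = ("correct", "true_but_nonresponsive", "dismissed_argument", "common_sense")

@dataclass
class Option:
    text: str
    role: str  # one of ROLES

@dataclass
class Question:
    stem: str
    options: list  # list of Option

    def shuffled(self, rng=random):
        """Return the options in random order for one exam form."""
        opts = self.options[:]
        rng.shuffle(opts)
        return opts

    def is_correct(self, option: Option) -> bool:
        return option.role == "correct"
```

Tagging each option with its role also makes it easy to audit a question bank mechanically, e.g. to reject any question that lacks a "dismissed_argument" distractor.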
Step 3:
Give the students practice questions, and explain the system to them. I
hide nothing. I tell them how I write the questions, why I write them
the way I do, and what I expect of them. I even have them talk about
what to write on their sheets of paper. I use my university’s online
courseware,
which as Jason Jones explained in a 2010 ProfHacker post, takes the
practice quiz out of class time, and lets students have multiple
cracks at it as they get ready for the exam.
A few other
guidelines:
- Answers should be
as short as possible; most of the detail should be in the question
stem
- Answers should be
of roughly the same length
- I never use “all of
the above” or “none of the above”
- Since we are
testing on comprehension of arguments, I always attribute positions
to an author (“According to Stuart Hall”), so it is not a question
about reality or what the student thinks, but what the student
understands authors to mean.
-
Exception: I will ask
categorical questions, i.e., "According to Terranova, which of the
following 4 items would not be an example of ‘free labour’?”
Step 4
(optional): For the first time in 2012, I had students try to write
questions themselves. Over the course of about 10 weeks, I had groups of
18 students write up and post questions on the discussion board (that
follow the rules above) that pertained to readings or lectures from
their assigned week. A large number of them were pretty good, so I
edited them and added them to my question bank for the final exam. So
for fall 2012, my COMS 210 students wrote about half the questions they
were likely to encounter on the final. If they were exceptionally lucky,
their own question might wind up on their own exam (we used 4 different
forms for the final).
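Assembling several exam forms from one question bank, as with the four final-exam forms mentioned above, amounts to sampling and reordering the bank per form. A sketch, assuming a plain list of questions; `make_forms` and the placeholder question names are inventions for illustration:

```python
# Hedged sketch of building several exam forms from one question bank.
# Question texts below are placeholders, not real exam items.
import random

def make_forms(bank, n_forms, questions_per_form, seed=0):
    """Each form gets its own sample and ordering from the same bank."""
    rng = random.Random(seed)  # fixed seed for reproducible forms
    forms = []
    for _ in range(n_forms):
        # sample() draws without replacement, so no form repeats a question
        forms.append(rng.sample(bank, questions_per_form))
    return forms

bank = [f"Q{i}" for i in range(1, 41)]
forms = make_forms(bank, n_forms=4, questions_per_form=20)
```

With a seeded generator, the same four forms can be regenerated later, which helps when an answer key must be rebuilt.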
Here are links
to my
syllabus and to a copy of the
write your own multiple choice assignment
(with the names removed).
Caveats
- This is an
imperfect system, but it’s the best I’ve found that combines an
economy of labor, vigorous testing, analytical thinking (rather than
recognition) and expansiveness—the students need to engage with all
of the readings. It is certainly not, as Cathy says, a “boss task” –
that’s the term paper.
- McGill
undergraduates are generally very strong students. This format, or
the optional assignment, may be less appropriate for undergrad
populations who don’t arrive at university “already very good at
school.”
- The optional
assignment was definitely more work than just writing new questions
myself. And not all the students will appreciate it (or that
fact–though I only got one complaint out of 187 students). It did
seem to reduce test anxiety among the students I talked with,
though, which is always a good thing.
I think a lot
about large-lecture pedagogy and I’d be delighted to hear from other
profs—in any university field—who teach big classes and who find ways to
nurture student learning and intense evaluation in an environment
structured by limited resources and large numbers.
Continued in article
A Defense of the Multiple-Choice Exam ---
http://www.chronicle.com/article/A-Defense-of-the/238098?cid=at&utm_source=at&utm_medium=en&elqTrackId=6c34011386bb4157bf32871f93fc6070&elq=58de49d36d48489c80569a3b1345dd98&elqaid=11172&elqat=1&elqCampaignId=4303
Jensen Comment
Assume that the test banks for textbooks have been compromised. You might be
able to confuse your students by using a test bank of a competitor's textbook,
but eventually students will catch on to what you are doing. Also test banks
seldom have good multiple choice exam questions except when the questions have
been adapted from CPA, CMA, or other certification examinations. But such
adaptations increase the likelihood that students have access to archives of
such questions.
Another trick is to slightly reword the questions so as to change the
answers. This, however, may become harder than writing your own questions from
scratch.
Also assume that the examinations, especially essay and case questions, you
gave in previous terms are in student archives such as fraternity files.
Since students are going to face multiple-choice examinations on future GRE, GMAT, LSAT, CPA, CMA, and other examinations, you can do them a favor by devoting course time to teaching them how to take multiple-choice examinations.
Enter the phrase "How to take a multiple choice" at
http://www.bing.com/
Just after the Ice Age when I prepared to take the CPA examination there were no CPA coaching materials like you can buy today (VCR machines and computers had not yet been invented). I mostly studied for the CPA examination by concentrating as best I could on former CPA examinations (which were available in hard copy in those days). By the way, in addition to multiple choice questions there were essay questions and problems on CPA examinations even in those days. My lowest score was in the auditing part of the examination. I would never have passed that part if the grader had not given me credit for an essay answer that I crossed out. In those days you could take the CPA examination as a senior in college before you graduated. What a great feeling to graduate with that monkey off your back.
Onsite Versus Online Differences for Faculty
"U. of Phoenix Reports on
Its Students' Academic Achievement," by Goldie Blumenstyk, Chronicle of
Higher Education, June 5, 2008 ---
http://chronicle.com/daily/2008/06/3115n.htm?utm_source=at&utm_medium=en
The University of Phoenix is often derided by
traditional academics for caring more about its bottom line than about
academic quality, and every year, the annual report issued by its parent
company focuses more on profits than student performance.
The institution that has become the largest private
university in North America is releasing its first "Annual Academic Report,"
which it will make available on its
Web site
today. The university's leaders say the
findings show that its educational model is effective in helping students
succeed in college, especially those who are underprepared.
Freshmen at the University of Phoenix enter with
reading, writing, and mathematical skills that are, on average, below those
of other college students, but according to data from standardized tests,
Phoenix students appear to improve in those skills at a greater rate than do
students at other colleges.
And in a comparison of students who enter college
with "risk factors" that often contribute to their dropping out, Phoenix's
rates of completion for a bachelor's degree were substantially higher than
for institutions over all.
William J. Pepicello, president of the
330,000-student university, said those and other findings shared in advance
with The Chronicle show that the 32-year-old, open-access institution
is fulfilling its goals.
"This ties into our social mission for our
university," said Mr. Pepicello, in an interview at the company's
headquarters here. "We take these students and we do give them a significant
increase in skills."
Phoenix for years has been extensively measuring
and monitoring student progress for internal purposes, using the data to
change the content and design of its courses or to reshape its approach to
remedial education.
It decided to develop and publish this
report—distinct from the financial reports that its parent company, the
$2.6-billion Apollo Group Inc., regularly provides—as "a good-faith attempt
on our part" to show the university's commitment to growing public demand
for more accountability by institutions of higher education, said Mr.
Pepicello.
He and other university leaders fully expect some
challenges to the findings, but they say the institution, by publishing the
report, is showing its willingness to confront scrutiny of its educational
record from within academe. "It lets us, in a public forum, talk to our
colleagues about what we do and how well we do it," said Mr. Pepicello.
The introduction this academic year of a test that
could be administered to both campus-based and distance-education
students—the Measure of Academic Proficiency and Progress exam by the
Educational Testing Service—also made this kind of reporting possible, he
said. Nearly two-thirds of Phoenix students attend online.
Patrick M. Callan, president of the National Center
for Public Policy and Higher Education, said that although he had not yet
seen Phoenix's data, its decision to publish such a report was "a very
positive development."
He has urged colleges to be open in their reporting
on themselves. Even if the university has chosen to release data that put it
in the best light, as others often do, Mr. Callan said the report will be a
significant piece of the national debate over what value an institution can
add to a student.
"For higher education, it is a positive and useful
and constructive approach," Mr. Callan said. Publication of the report, he
added, was in line with other efforts by the university "to be part of the
discussion on the outcomes of higher education." Those efforts include the
university's recent creation of a research center on adult learners (for
which Mr. Callan is an unpaid adviser).
A Mixed Report Card
In the report, some of those outcomes look better
than others.
"It certainly is not perfect," said Mr. Pepicello
of some of the test scores. "It is where we are."
In its report, Phoenix shows the results from its
1,966 students who took the MAPP test this year, compared with the national
sample of more than 376,000 students from about 300 institutions.
The results show that in reading, critical
thinking, and writing, its freshmen scored below those of the population
over all, but the difference between those scores and those of its seniors
was greater than for the population at large. The difference was more marked
in mathematics, although the university's freshmen and seniors' scores were
both notably lower than those of the whole test-taking pool.
Bill Wynne, MAPP test product specialist, said that
without knowing more about the makeup of the comparative samples and other
information, he could not characterize the statistical significance of the
gains the university was reporting, except that they were at least as good
as those reported by the national cross section. "The magnitude of the
change is in the eye of the beholder," he said.
Mr. Pepicello said he wished the seniors' scores
were higher, particularly in math, but he considered all of the findings
positive because they indicated that students improve when they attend.
"This doesn't embarrass me," he said. "This is really good information for
us to really improve our institution."
(Phoenix did not track the progress of individual
students, but MAPP officials said the university's pool of freshmen and
seniors taking the test was large enough and random enough to justify its
using different groups of students for comparisons.)
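The comparison described above is simple cross-sectional gain arithmetic: subtract the freshman mean from the senior mean, then compare the institution's difference with the national pool's. A sketch with hypothetical scores (not the report's actual figures):

```python
# Cross-sectional "gain" as described above: senior mean minus freshman
# mean, compared between institution and national pool. All numbers are
# hypothetical placeholders, not data from the Phoenix report.

def gain(freshman_mean, senior_mean):
    """Difference between senior and freshman cohort means."""
    return senior_mean - freshman_mean

phoenix_gain = gain(430.0, 446.0)    # hypothetical MAPP-style means
national_gain = gain(442.0, 451.0)
larger_gain_at_phoenix = phoenix_gain > national_gain
```

As Wynne notes in the article, a larger raw difference says nothing by itself about statistical significance, since these are different groups of students rather than tracked individuals.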
In another test, involving a smaller pool of
students, the Phoenix students' "information literacy" skills for such tasks
as evaluating sources and understanding economic, legal, and social issues
were also comparable to or significantly higher than the mean scores in
several categories. Adam Honea, the provost, said the findings from the
Standardized Assessment of Information Literacy Skills test, developed at
Kent State University, were important to the institution since "information
literacy is a goal of ours."
Continued in article
Bob Jensen's threads on asynchronous learning are at
http://faculty.trinity.edu/rjensen/255wp.htm
Keep in mind that the University of Phoenix has a combination of onsite and
online degree programs.
Bob Jensen's threads on controversies of education
technology and online learning are at
http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
Bob Jensen's threads on online training and education
alternatives are at
http://faculty.trinity.edu/rjensen/crossborder.htm
Bob Jensen's threads on higher education controversies
are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
The Chronicle's Goldie Blumenstyk has covered
distance education for more than a decade, and during that time she's written
stories about
the economics of for-profit education,
the ways that online institutions
market themselves, and the demise
of
the 50-percent rule. About the
only thing she hadn't done, it seemed, was to take a course from an online
university. But this spring she finally took the plunge, and now she has
completed a class in government and nonprofit accounting through the University
of Phoenix. She shares tales from the cyber-classroom -- and her final grade --
in a podcast with Paul Fain, a
Chronicle reporter.
Chronicle of Higher Education, June 11, 2008 (Audio) ---
http://chronicle.com/media/audio/v54/i40/cyber_classroom/
· All course materials (including textbooks) online; no additional textbooks to purchase
· $1,600 fee for the course and materials
· Woman instructor with respectable academic credentials and experience in course content
· Instructor had good communications with students and between students
· Total of 14 quite dedicated online students in course, most of whom were mature with full-time day jobs
· 30% of grade from team projects
· Many unassigned online helper tutorials that were not fully utilized by Goldie
· Goldie earned a 92 (A-)
· She gave a positive evaluation to the course and would gladly take other courses if she had the time
· She considered the course to have a heavy workload
"The Chronicle's special report on Online Learning explores how calls for
quality control and assessment are reshaping online learning," (Not Free),
Chronicle of Higher Education, November 2011 ---
https://www.chronicle-store.com/Store/ProductDetails.aspx?CO=CQ&ID=78602&cid=ol_nlb_wc
The Chronicle's special report on Online Learning explores how calls for
quality control and assessment are reshaping online learning.
As online learning spreads throughout higher
education, so have calls for quality control and assessment. Accrediting
groups are scrambling to keep up, and Congress and government officials
continue to scrutinize the high student-loan default rates and aggressive
recruiting tactics of some for-profit, mostly online colleges. But the push
for accountability isn't coming just from outside. More colleges are looking
inward, conducting their own self-examinations into what works and what
doesn't.
Also in this year's report:
- Strategies for teaching and doing research
online
- Members of the U.S. military are taking online
courses while serving in Afghanistan
- Community colleges are using online technology
to keep an eye on at-risk students and help them understand their own
learning style
- The push to determine what students learn
online, not just how much time they spend in class
- Presidents' views on e-learning
Bob Jensen's threads on asynchronous learning ---
http://faculty.trinity.edu/rjensen/255wp.htm
Bob Jensen's threads on online course and degree programs ---
http://faculty.trinity.edu/rjensen/Crossborder.htm
Soaring Popularity of E-Learning Among Students But Not Faculty
How many U.S. students took at least one online course from a legitimate college
in Fall 2005?
More students are taking online college courses than
ever before, yet the majority of faculty still aren’t warming up to the concept
of e-learning, according to a national survey from the country’s largest
association of organizations and institutions focused on online education . . .
‘We didn’t become faculty to sit in front of a computer screen,’
Elia Powers, "Growing Popularity of E-Learning, Inside Higher Ed,
November 10, 2006 ---
http://www.insidehighered.com/news/2006/11/10/online
More students are taking online college courses
than ever before, yet the majority of faculty still aren’t warming up to the
concept of e-learning, according to a national survey from the country’s
largest association of organizations and institutions focused on online
education.
Roughly 3.2 million students took at least one
online course from a degree-granting institution during the fall 2005 term,
the Sloan Consortium said. That’s double the number who reported doing so in
2002, the first year the group collected data, and more than 800,000 above
the 2004 total. While the number of online course participants has increased
each year, the rate of growth slowed from 2003 to 2004.
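The quoted Sloan figures imply some rough growth arithmetic: 3.2 million in fall 2005, about double the 2002 figure, and more than 800,000 above 2004. A sketch of the implied numbers (approximations derived from the article, not exact Sloan data):

```python
# Rough enrollment arithmetic implied by the Sloan figures quoted above.
# These are approximations from the article's rounded numbers.
enroll_2005 = 3_200_000
enroll_2004 = enroll_2005 - 800_000   # "more than 800,000 above the 2004 total"
enroll_2002 = enroll_2005 / 2         # "double the number ... in 2002"

# Year-over-year growth rate from 2004 to 2005
growth_2004_2005 = (enroll_2005 - enroll_2004) / enroll_2004
print(f"2004-2005 growth rate: {growth_2004_2005:.0%}")
```

Even with growth slowing after 2003-2004, these figures put the 2004-2005 increase at roughly a third, which is why the article calls the popularity "soaring."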
The report, a joint partnership between the group
and the College Board, defines online courses as those in which 80 percent
of the content is delivered via the Internet.
The Sloan Survey of Online Learning,
“Making the Grade: Online Education in the United States, 2006,”
shows that 62 percent of chief academic officers say
that the learning outcomes in online education are now “as good as or
superior to face-to-face instruction,” and nearly 6 in 10 agree that
e-learning is “critical to the long-term strategy of their institution.”
Both numbers are up from a year ago.
Researchers at the Sloan Consortium, which is
administered through Babson College and Franklin W. Olin College of
Engineering, received responses from officials at more than 2,200 colleges
and universities across the country. (The report makes few references to
for-profit colleges, a force in the online market, in part because of a lack
of survey responses from those institutions.)
Much of the report is hardly surprising. The bulk
of online students are adult or “nontraditional” learners, and more than 70
percent of those surveyed said online education reaches students not served
by face-to-face programs.
What stands out is the number of faculty who still
don’t see e-learning as a valuable tool. Only about one in four academic
leaders said that their faculty members “accept the value and legitimacy of
online education,” the survey shows. That number has remained steady
throughout the four surveys. Private nonprofit colleges were the least
accepting — about one in five faculty members reported seeing value in the
programs.
Elaine Allen, co-author of the report and a Babson
associate professor of statistics and entrepreneurship, said those numbers
are striking.
“As a faculty member, I read that response as, ‘We
didn’t become faculty to sit in front of a computer screen,’ ” Allen said.
“It’s a very hard adjustment. We sat in lectures for an hour when we were
students, but there’s a paradigm shift in how people learn.”
Barbara Macaulay, chief academic officer at UMass
Online, which offers programs through the University of Massachusetts, said
nearly all faculty members teaching the online classes there also teach
face-to-face courses, enabling them to see where an online class could fill
in the gap (for instance, serving a student who is hesitant to speak up in
class).
She said she isn’t surprised to see data
illustrating the growing popularity of online courses with students, because
her program has seen rapid growth in the last year. Roughly 24,000 students
are enrolled in online degree and certificate courses through the university
this fall — a 23 percent increase from a year ago, she said.
“Undergraduates see it as a way to complete their
degrees — it gives them more flexibility,” Macaulay said.
The Sloan report shows that about 80 percent of
students taking online courses are at the undergraduate level. About half
are taking online courses through community colleges and 13 percent through
doctoral and research universities, according to the survey.
Nearly all institutions with total enrollments
exceeding 15,000 students have some online offerings, and about two-thirds
of them have fully online programs, compared with about one in six at the
smallest institutions (those with 1,500 students or fewer), the report
notes. Allen said private nonprofit colleges are often set in enrollment
totals and not looking to expand into the online market.
The report indicates that two-year colleges are particularly willing to be
involved in online learning.
“Our institutions tend to embrace changes a little
more readily and try different pedagogical styles,” said Kent Phillippe, a
senior research associate at the American Association of Community Colleges.
The report cites a few barriers to what it calls the “widespread adoption of
online learning,” chief among them the concern among college officials that
some of their students lack the discipline to succeed in an online setting.
Nearly two-thirds of survey respondents defined that as a barrier.
Allen, the report’s co-author, said she thinks that
issue arises mostly in classes in which work can be turned in at any time
and lectures can be accessed at all hours. “If you are holding class in real
time, there tends to be less attrition,” she said. The report doesn’t
differentiate between the live and non-live online courses, but Allen said
she plans to include that in next year’s edition.
Few survey respondents said acceptance of online
degrees by potential employers was a critical barrier — although liberal
arts college officials were more apt to see it as an issue.
November 10, 2006 reply from John Brozovsky
[jbrozovs@vt.edu]
Hi Bob:
One reason why might be what I have seen. The
in residence accounting students that I talk with take online classes
here because they are EASY and do not take much work. This would be very
popular with students but not generally so with faculty.
John
November 10, 2006 reply from Bob Jensen
Hi John,
Then there is a quality control problem wherever this is the case. It would be a travesty if any respected college had two or more categories of academic standards or faculty assignments.
Variations in academic standards have long been a problem between
part-time versus full-time faculty, although grade inflation can be higher
or lower among part-time faculty. In one instance, it’s the tenure-track
faculty who give higher grades because they're often more worried about
student evaluations. At the opposite extreme it is part-time faculty who
give higher grades for many reasons that we can think of if we think about
it.
One thing that I'm dead certain about is that highly motivated students
tend to do better in online courses ceteris paribus. Reasons are mainly that
time is used more efficiently in getting to class (no wasted time driving or
walking to class), less wasted time getting teammates together on team
projects, and fewer reasons for missing class.
Also online alternatives offer some key advantages for certain types of
handicapped students ---
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm
My opinions on learning advantages of E-Learning were heavily influenced
by the most extensive and respected study of online versus onsite learning
experiments in the SCALE experiments
using full-time resident students at the University of Illinois ---
http://faculty.trinity.edu/rjensen/255wp.htm#Illinois
In the SCALE experiments cutting across 30 disciplines, it was generally found that motivated students learned better online than their onsite counterparts having the same instructors. However, there was no significant impact on students who got low grades in online versus onsite treatment groups.
I think the main problem with faculty is that online teaching tends to burn out instructors more frequently than onsite teaching. This was also evident in the SCALE experiments. When done correctly, online courses are more communication intensive between instructors and students. Also, online learning takes more preparation time if it is done correctly.
My hero for online learning is still Amy Dunbar who
maintains high standards for everything:
http://www.cs.trinity.edu/~rjensen/002cpe/02start.htm
http://faculty.trinity.edu/rjensen/book01q4.htm#Dunbar
Bob Jensen
November 10, 2006 reply from John Brozovsky
[jbrozovs@vt.edu]
Hi Bob:
Also why many times it is not done 'right'. Not
done right they do not get the same education. Students generally do not
complain about getting 'less for their money'. Since we do not do online
classes in department the ones the students are taking are the university
required general education and our students in particular are not unhappy
with being shortchanged in that area as they frequently would have preferred
none anyway.
John
Bob Jensen's threads on open sharing and education technology are at
http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
Bob Jensen's threads on online training and education alternatives are at
http://faculty.trinity.edu/rjensen/crossborder.htm
Motivations for Distance Learning ---
http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#Motivations
Bob Jensen's threads on the dark side of online learning and teaching are
at
http://faculty.trinity.edu/rjensen/000aaa/theworry.htm
Question
Why should teaching a course online take twice as much time as teaching it
onsite?
Answer
Introduction to Economics: Experiences of teaching this course online
versus onsite
With a growing number of courses offered online and
degrees offered through the Internet, there is a considerable interest in online
education, particularly as it relates to the quality of online instruction. The
major concerns are centering on the following questions: What will be the new
role for instructors in online education? How will students' learning outcomes
be assured and improved in online learning environment? How will effective
communication and interaction be established with students in the absence of
face-to-face instruction? How will instructors motivate students to learn in the
online learning environment? This paper will examine new challenges and barriers
for online instructors, highlight major themes prevalent in the literature
related to “quality control or assurance” in online education, and provide
practical strategies for instructors to design and deliver effective online
instruction. Recommendations will be made on how to prepare instructors for
quality online instruction.
Yi Yang and Linda F. Cornelious, "Preparing Instructors for Quality
Online Instruction, Working Paper ---
http://www.westga.edu/%7Edistance/ojdla/spring81/yang81.htm
Jensen Comment: The bottom line is that teaching the course online took twice as much time, "largely from increased student contact and individualized instruction and not from the use of technology per se."
Online teaching is more likely to result in instructor burnout. These
and other issues are discussed in my "dark side" paper at
http://faculty.trinity.edu/rjensen/000aaa/theworry.htm
April 1, 2005 message from Carolyn Kotlas
[kotlas@email.unc.edu]
COMPUTERS IN THE CLASSROOM AND OPEN BOOK EXAMS
In "PCs in the Classroom & Open Book Exams" (UBIQUITY, vol. 6, issue 9,
March 15-22, 2005), Evan Golub asks and supplies some answers to questions
regarding open-book/open-note exams. When classroom computer use is allowed
and encouraged, how can instructors secure the open-book exam environment?
How can cheating be minimized when students are allowed Internet access
during open-book exams? Golub's suggested solutions are available online at
http://www.acm.org/ubiquity/views/v6i9_golub.html
Ubiquity is a free, Web-based publication of the
Association for Computing Machinery (ACM), "dedicated to fostering critical
analysis and in-depth commentary on issues relating to the nature,
constitution, structure, science, engineering, technology, practices, and
paradigms of the IT profession." For more information, contact: Ubiquity,
email: ubiquity@acm.org ; Web:
http://www.acm.org/ubiquity/
For more information on the ACM, contact: ACM, One Astor Plaza, 1515
Broadway, New York, NY 10036, USA; tel: 800-342-6626 or 212-626-0500; Web:
http://www.acm.org/
NEW EDUCAUSE E-BOOK ON THE NET GENERATION
EDUCATING THE NET GENERATION, a new EDUCAUSE
e-book of essays edited by Diana G. Oblinger and James L. Oblinger,
"explores the Net Gen and the implications for institutions in areas such as
teaching, service, learning space design, faculty development, and
curriculum." Essays include: "Technology and Learning Expectations of the
Net Generation;" "Using Technology as a Learning Tool, Not Just the Cool New
Thing;" "Curricula Designed to Meet 21st-Century Expectations;" "Faculty
Development for the Net Generation;" and "Net Generation Students and
Libraries." The entire book is available online at no cost at
http://www.educause.edu/educatingthenetgen/
EDUCAUSE is a nonprofit association whose mission
is to advance higher education by promoting the intelligent use of
information technology. For more information, contact: Educause, 4772 Walnut
Street, Suite 206, Boulder, CO 80301-2538 USA; tel: 303-449-4430; fax:
303-440-0461; email:
info@educause.edu; Web:
http://www.educause.edu/
See also:
GROWING UP DIGITAL: THE RISE OF THE NET GENERATION
by Don Tapscott, McGraw-Hill, 1999; ISBN: 0-07-063361-4
http://www.growingupdigital.com/
EFFECTIVE E-LEARNING DESIGN
"The unpredictability of the student context and
the mediated relationship with the student require careful attention by the
educational designer to details which might otherwise be managed by the
teacher at the time of instruction." In "Elements of Effective e-Learning
Design" (INTERNATIONAL REVIEW OF RESEARCH IN OPEN AND DISTANCE LEARNING,
March 2005) Andrew R. Brown and Bradley D. Voltz cover six elements of
effective design that can help create effective e-learning delivery. Drawing
upon examples from The Le@rning Federation, an initiative of state and
federal governments of Australia and New Zealand, they discuss lesson
planning, instructional design, creative writing, and software
specification. The paper is available online at
http://www.irrodl.org/content/v6.1/brown_voltz.html
International Review of Research in Open and
Distance Learning (IRRODL) [ISSN 1492-3831] is a free, refereed ejournal
published by Athabasca University - Canada's Open University. For more
information, contact Paula Smith, IRRODL Managing Editor; tel: 780-675-6810;
fax: 780-675-672; email:
irrodl@athabascau.ca
; Web:
http://www.irrodl.org/
The Le@rning Federation (TLF) is an "initiative
designed to create online curriculum materials and the necessary
infrastructure to ensure that teachers and students in Australia and New
Zealand can use these materials to widen and enhance their learning
experiences in the classroom." For more information, see
http://www.thelearningfederation.edu.au/
RECOMMENDED READING
"Recommended Reading" lists items that have been
recommended to me or that Infobits readers have found particularly
interesting and/or useful, including books, articles, and websites published
by Infobits subscribers. Send your recommendations to
carolyn_kotlas@unc.edu for possible inclusion in this column.
Author Clark Aldrich recommends his new book:
LEARNING BY DOING: A COMPREHENSIVE GUIDE TO
SIMULATIONS, COMPUTER GAMES, AND PEDAGOGY IN E-LEARNING AND OTHER
EDUCATIONAL EXPERIENCES Wiley, April 2005 ISBN: 0-7879-7735-7 hardcover
$60.00 (US)
Description from Wiley website:
"Designed for learning professionals and drawing on
both game creators and instructional designers, Learning by Doing explains
how to select, research, build, sell, deploy, and measure the right type of
educational simulation for the right situation. It covers simple approaches
that use basic or no technology through projects on the scale of computer
games and flight simulators. The book role models content as well, written
accessibly with humor, precision, interactivity, and lots of pictures. Many
will also find it a useful tool to improve communication between themselves
and their customers, employees, sponsors, and colleagues."
The table of contents and some excerpts are
available at
http://www.wiley.com/WileyCDA/WileyTitle/productCd-0787977357.html
Aldrich is also author of SIMULATIONS AND THE FUTURE OF LEARNING: AN
INNOVATIVE (AND PERHAPS REVOLUTIONARY) APPROACH TO E-LEARNING. See
http://www.wiley.com/WileyCDA/WileyTitle/productCd-0787969621.html
for more information or to request an evaluation copy of this title.
Also see
Looking at Learning….Again, Part 2 ---
http://www.learner.org/resources/series114.html
Bob Jensen's documents on education technology are at
http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
More on this topic appears in the module below.
"Far From Honorable," by Steve Kolowich, Inside Higher Ed,
October 25, 2011 ---
http://www.insidehighered.com/news/2011/10/25/online-students-might-feel-less-accountable-honor-codes
Much of the urgency around creating a “sense of
community” in online courses springs from a desire to keep online students
from dropping out. But a recent paper suggests that strengthening a sense of
social belonging among online students might help universities fight another
problem: cheating.
In a series of experiments, researchers at Ohio
University found that students in fully online psychology courses who signed
an honor code promising not to cheat broke that pledge at a significantly
higher rate than did students in a “blended” course that took place
primarily in a classroom.
“The more distant students are, the more
disconnected they feel, and the more likely it is that they’ll rationalize
cheating,” Frank M. LoSchiavo, one of the authors, conjectured in an
interview with Inside Higher Ed.
While acknowledging the limitations inherent to a
study with such a narrow sample, and the fact that motivations are
particularly hard to pin down when it comes to cheating, LoSchiavo and Mark
A. Shatz, both psychology professors at Ohio University's Zanesville campus,
said their findings may indicate that meeting face-to-face with peers and
professors confers a stronger sense of accountability among students. “Honor
codes,” LoSchiavo said, “are more effective when there are [strong] social
connections.”
Honor codes are not, of course, the only method of
deterring cheating in online courses. The proliferation of online programs
has given rise to a
cottage industry of
remote proctoring technology, including one product that takes periodic
fingerprint readings while monitoring a student’s test-taking environment
with a 360-degree camera. (A 2010 survey by the Campus Computing Project
suggests that a minority of institutions authenticate the identities of
online students as a rule.)
But LoSchiavo said that he and Shatz were more
interested in finding out whether honor codes held any sway online. If so,
then online instructors might add pledges to their arsenal of anti-cheating
tools, LoSchiavo said. If not, the finding would still be an intriguing
contribution to the discussion about student engagement and “perceived social
distance” in the online environment.
They experimented with the effectiveness of honor
codes in three introductory psychology courses at Ohio University. The first
course had 40 students and was completely online. These students, like those
in subsequent trials, were a mix of traditional-age and adult students,
mostly from regional campuses in the Ohio University system. There was no
honor code. Over the course of the term, the students took 14
multiple-choice quizzes with no proctoring of any kind. At the end of the
term, 73 percent of the students admitted to cheating on at least one of
them.
The second trial involved another fully online
introductory course in the same subject. LoSchiavo and Shatz divided the
class evenly into two groups of 42 students, and imposed an honor code --
posted online with the other course materials -- on one group but not the
other. The students “digitally signed the code during the first week of the
term, prior to completing any assignments.” The definition of cheating was
the same as in the first trial: no notes, no textbooks, no Internet, no
family or friends. There was no significant difference in the self-reported
cheating between the two groups.
In a third trial, the professors repeated the
experiment with 165 undergraduates in a “blended” course, where only 20
percent of the course was administered online and 80 percent in a
traditional classroom setting. Again, they split the students into two
groups: one in which they were asked to sign an honor code, and another in
which they were not.
This time, when LoSchiavo and Shatz surveyed the
students at the end of the term, there was a significant difference:
Students who promised not to cheat were about 25 percent less likely to
cheat than were those who made no such promise. Among the students who had
not signed the code, 82 percent admitted to cheating.
LoSchiavo concedes that this study offers no
definitive answers on the question of whether students are more likely to
cheat in fully online courses. Cheating is more often than not a crime of
opportunity, and containing integrity violations probably has much more to
do with designing a system that limits the opportunities to cheat and gives
relatively little weight to those assignments for which cheating is hardest
to police.
“The bottom line is that if there are
opportunities, students will cheat,” he said. “And the more opportunities
they have, the more cheating there will be, and it is incumbent upon
professors to put in a system that, when it’s important, cheating will be
contained.”
Continued in article
Jensen Comment
I think universities like Trinity University that expanded their honor codes to
include student courts are generally happy with the operations of those honor
codes. However, Trinity has only full time students and no distance education
courses.
One thing that I hated giving up was grading control. For most of my teaching
career I gave F grades to students who seriously cheated in my courses. Under
the revised Trinity Honor Code, instructors can no longer control the granting
of F grades for cheating.
When I was a student at Stanford the Honor Code included a pledge to report
cheating of other students. I think most universities have watered down this
aspect of their honor codes because, in this greatly increased era of
litigation, student whistle blowers can be sued big time. Universities may
continue to encourage such whistle blowing, but they no longer make students
sign pledges that on their honor they will be whistleblowers if they do not want
to bear the risk of litigation by students they report.
Bob Jensen's threads on assessment ---
http://faculty.trinity.edu/rjensen/Assess.htm
"Nationally Recognized Assessment and Higher Education Study Center
Findings as Resources for Assessment Projects," by Tracey Sutherland,
Accounting Education News, 2007 Winter Issue, pp. 5-7
While nearly all accounting programs are wrestling
with various kinds of assessment initiatives to meet local assessment plans
and/or accreditation needs, most colleges and universities participate in
larger assessment projects whose results may not be shared at the
College/School level. There may be information available on your campus
through campus-level assessment and institutional research that generate
data that could be useful for your accounting program/school assessment
initiatives. Below are examples of three such research projects, and some of
their recent findings about college students.
- The Cooperative Institutional Research Program
(CIRP) The American Freshman: National Norms for 2006
- The 2006 Report of the National Survey of
Student Engagement
- From the National Freshman Attitudes Report
2007
Some things in The 2006 Report of the National Survey of Student
Engagement especially caught my eye:
Promising Findings from the National Survey of Student Engagement
• Student engagement is positively related to first-year and senior student grades and to persistence between the first and second year of college.
• Student engagement has compensatory effects on grades and persistence of students from historically underserved backgrounds.
• Compared with campus-based students, distance education learners reported higher levels of academic challenge, engaged more often in deep learning activities, and reported greater developmental gains from college.
• Part-time working students reported grades comparable to other students and also perceived the campus to be as supportive of their academic and social needs as their non-working peers.
• Four out of five beginning college students expected that reflective learning activities would be an important part of their first-year experience.
Disappointing Findings from the National Survey of Student Engagement
• Students spend on average only about 13–14 hours a week preparing for class, far below what faculty members say is necessary to do well in their classes.
• Students study less during the first year of college than they expected to at the start of the academic year.
• Women are less likely than men to interact with faculty members outside of class, including doing research with a faculty member.
• Distance education students are less involved in active and collaborative learning.
• Adult learners were much less likely to have participated in such enriching educational activities as community service, foreign language study, a culminating senior experience, research with faculty, and co-curricular activities.
• Compared with other students, part-time students who are working had less contact with faculty and participated less in active and collaborative learning activities and enriching educational experiences.
Some additional 2006 NSSE findings
• Distance education students reported higher levels of academic challenge, and reported engaging more often in deep learning activities such as the reflective learning activities. They also reported participating less in collaborative learning experiences and worked more hours off campus.
• Women students are more likely to be engaged in foreign language coursework.
• Male students spent more time engaged in working with classmates on projects outside of class.
• Almost half (46%) of adult students were working more than 30 hours per week and about three-fourths were caring for dependents. In contrast, only 3% of traditional-age students worked more than 30 hours per week, and about four-fifths spent no time caring for dependents.
Bob Jensen's threads on higher education controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
Students Reviewing Each Others' Projects
January 30, 2009 message from David Fordham, James Madison University
[fordhadr@JMU.EDU]
I teach an MBA section of "Introduction to
Information Security". One of the course requirements is an Information
Security Policy Manual for a hypothetical company. Students submit their
manuals electronically, with the only identifying information being their
name as the title of the file. I strip off all other identifying information
(Tools-Options, File-Properties, etc.) from the document and change the name
of the file to "Student 1" "Student 2" etc.
Then, I distribute the file to two other students
for blind review.
In reality, each author receives THREE (3) reviews,
because I myself provide a review, in addition to the two students. I do NOT
identify the reviewers, either, so the author gets three reviews, but does
not know which one is mine and which are the other two student reviews. Two
are blind, and one is mine, but all the student gets is "review 1", "review
2", and "review 3". I am NOT always "review 3".
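The anonymize-and-distribute workflow Fordham describes can be sketched in a few lines. This is a hypothetical helper (the function and variable names are mine, not his): it relabels each submission as "Student 1", "Student 2", etc., and assigns every paper to two classmate reviewers, never its own author, with an equal review load for everyone. A circular shift of a shuffled roster guarantees both properties for any shift between 1 and n-1.

```python
import random

def anonymize_and_assign(authors, reviews_per_paper=2, seed=None):
    """Relabel submissions anonymously and assign blind peer reviewers.

    Returns (labels, assignments): labels maps each author to an
    anonymous file name; assignments maps each author to the list of
    classmates who will review that author's paper.
    """
    n = len(authors)
    if n <= reviews_per_paper:
        raise ValueError("need more students than reviews per paper")
    rng = random.Random(seed)
    order = authors[:]
    rng.shuffle(order)
    # Anonymous labels stand in for the stripped document metadata.
    labels = {author: f"Student {i + 1}" for i, author in enumerate(order)}
    # Distinct circular shifts: no self-review, no duplicate reviewer
    # on one paper, and each student reviews exactly reviews_per_paper.
    shifts = rng.sample(range(1, n), reviews_per_paper)
    assignments = {}
    for i, author in enumerate(order):
        assignments[author] = [order[(i + s) % n] for s in shifts]
    return labels, assignments
```

The instructor's own (unlabeled) review can then simply be appended to each author's packet, as in Fordham's setup.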
This has proven to be very effective. Each student
gets to actually SEE two other students' work up close and personal and has
to put thought into evaluating it, and in so doing, can compare their peers'
work to their own. Plus, each student then gets three reviews from three
other individuals, making a total of FIVE (5) different perspectives with
which to compare their own.
This "reviewed" submission is the "mid-term"
submission. The students then have the option (all of them take it!) to
revise their manual if they wish for the final submission. The quality of
the final product is day-and-night difference from what I used to get: truly
professional level work. Hence, I'm a believer in the system.
(Plus, I can rage all I want in my review of the
first submission if its really bad, and the student doesn't know it's me!)
Incidentally, part of the course grade is how well
they review their two assigned manuals... I expect good comments,
constructive criticism, useful suggestions, etc. Because the students are
all in the executive MBA program, and because this approach is novel, I
usually get some really good participation and high-quality reviews.
No, it doesn't save me a lot of time, since I still
personally "grade" (e.g., do a review of) each submission. But I'm not doing it
to save time; I'm doing it because it gives high value to the student. I
can, however, easily see where peer review would be a fantastic time-saver
when a professor gives lengthy assignments to large numbers of students.
David Fordham
JMU
The inmates are running the asylum
From Duke University: One of the Most Irresponsible Grading Systems in the
World
Duke professor Cathy Davidson's approach? "So, this year, when I teach 'This Is
Your Brain on the Internet,' I'm trying out a new point system. Do all the work,
you get an A. Don't need an A? Don't have time to do all the work? No problem.
You can aim for and earn a B. There will be a chart. You do the assignment
satisfactorily, you get the points. Add up the points, there's your grade.
Clearcut. No guesswork. No second-guessing 'what the prof wants.' No gaming the
system. Clearcut. Student is responsible." That still leaves the question of
determining whether students have done the work. Here again, Davidson plans to
rely on students. "Since I already have structured my seminar (it worked
brilliantly last year) so that two students lead us in every class, they can now
also read all the class blogs (as they used to) and pass judgment on whether
they are satisfactory. Thumbs up, thumbs down," she writes.
Scott Jaschik, "Getting Out of Grading," Inside Higher Education, August
3, 2009
Jensen Comment
No mention of how Professor Davidson investigates and punishes plagiarism and
other easy ways to cheat in this system. My guess is that she leaves it up to
the students to police themselves any way they like. One way to cheat is simply
hire another student to do the assignment. With no examinations in a controlled
setting, who knows who is doing whose work?
It is fairly common for professors to use grading inputs when students evaluate
each others' term projects, but this is the first time I have ever heard of
turning the entire grading process (with no examinations) over to students in the class.
Read about how David Fordham has students evaluate term projects at
http://faculty.trinity.edu/rjensen/assess.htm#StudentPeerReview
August 4, 2009 reply from David Fordham, James Madison University
[fordhadr@JMU.EDU]
Bob, While I feel the way you do about it, it is
interesting to note that this type of thing isn't new.
In the fall semester of 1973, at the North Campus
of what today is the Florida State College in Jacksonville (formerly FCCJ,
and when I was going there it was called FJC), I enrolled in a
sophomore-level psychology class taught by Dr. Pat Greene. The very first
day, Dr. Greene handed out a list of 30 assignments. Each assignment was
independent study, and consisted of viewing a 15 to 60 minute
video/filmstrip/movie/etc. in the library, or reading a chapter in the
textbook, followed by completion of a 1 to 3 page "worksheet" covering the
major concepts covered in the "lesson".
As I recall, the worksheet was essentially a set of
fill-in-the-blank questions. It was open book, open note, open anything, and
when you completed the worksheet, you put your name on it and dropped it in
Dr. Greene's mailbox in the faculty offices lobby at your convenience.
The first 10 assignments were required in order to
pass the course, but students could pick and choose from the remainder. If
you stopped after the 10 required assignments, you got a D in the class. If
you did 15 assignments, you got a C; 20 a B, and if you completed all 30,
you got an A in the class. Students could pick which lessons to complete
(after the first 10) if they elected not to do all 30.
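The threshold scheme Fordham recalls reduces to a tiny lookup. The sketch below uses the cutoffs from his account (10 required assignments for a D, 15 for a C, 20 for a B, all 30 for an A); the function name is my own invention.

```python
def greene_grade(completed, required=10):
    """Letter grade under the 1973 assignment-count scheme:
    fewer than the required 10 fails; otherwise the highest
    threshold reached (30/20/15/10) sets the grade."""
    if completed < required:
        return "F"
    for cutoff, letter in ((30, "A"), (20, "B"), (15, "C"), (10, "D")):
        if completed >= cutoff:
            return letter
```

Note the incentive gradient Fordham observed: the jump from B (20 assignments) to A (30) costs half again as much work as the B itself.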
This was before email, YouTube, and PDF's. Students
worked at their own pace, there was no class meeting whatsoever after that
first day. After the first day of class where I received the syllabus and
assignment sheet, I never attended the classroom again. Dr. Greene
supposedly held office hours during class time for students who wanted to
ask questions, but I never needed it (nor did anyone else I knew of) because
the assignments were so simple and easy, especially since they were open
book, open note, and there was no time limit! There was no deadline, either,
you could take till the end of the semester if you wanted to.
Oh, and no exams, either.
This was also before FERPA. Dr. Greene had a roll
taped to his office door with all students' names on it. It was a manual
spreadsheet, and as you turned in assignments, you got check marks beside
your name in the columns showing which assignments you had "completed". We
never got any of the assignments back, but supposedly if an assignment had
too many errors, the student would get a dash mark instead of a check mark,
indicating the need to do it over again.
Within 2 weeks, I had completed all 30 assignments,
got my A, and never saw Dr. Greene again. I learned a lot about psychology
(everything from Maslow's Hierarchy to Pavlov's slobbering dogs, from the
(now infamous) Hawthorne Effect to the impact of color on emotions), so I
guess the class was a success. But what astounded me was that so many of my
classmates quit after earning the B. The idea of having to do half-again as
much work for an A compared to a B was apparently just too much for most of
my classmates, because when I (out of curiosity) stopped by his office at
the end of the semester, I was blown away by the fact that only a couple of
us had A's, whereas almost everyone else had B's (and a couple had C's,
again to my astonishment). I can't remember if there were any D's or F's.
At the time, I was new to the college environment,
and in my conversations with other faculty members, I discovered that
professors enjoyed something called "academic freedom", and none of my other
professors seemed to have any problem with what Dr. Greene was doing. In
later years, it occurred to me that perhaps we were guinea-pigs for a
psychology study he was doing on motivation. But since he was still using
this method six years later for my younger sister (and using the same
videos, films, and filmstrips!), I have my doubts.
Dr. Greene was a professor for many, many years.
Perhaps he was ahead of his time, with today's camtasia and snag-it and
you-tube recordings... None of his assigned work was his own, it was all
produced by professional producers, with the exception of his worksheets,
which were all the "purple plague" spirit-duplicator handouts.
I've often wondered how much more, if any, I could
have learned if he'd really met with the class and actually tried to teach.
But then again, as I took later psychology classes as part of my management
undergrad (org behavior, supervision, human relations, etc.) I was pleased
with how much I had learned in Dr. Greene's class, so I guess it wasn't a
complete waste of time. Many of my friends who were in his class with me
found the videos and filmstrips a nice break from the dry lectures of some
of our other profs at the time. Plus, we liked the independent-study
convenience. Oh, well...
Bottom line: this type of thing isn't new: 1973 was
35 years ago. Since academic freedom is still around, it doesn't surprise me
that Dr. Greene's teaching (and in this case, his grading) style is still
around too.
David Fordham
James Madison University
Bob Jensen's threads on cheating are at
http://faculty.trinity.edu/rjensen/plagiarism.htm
Online Versus Onsite for Students
August 25, 2009 message from
A lot of the face-to-face students I talk with like
online classes BECAUSE THEY ARE EASY. While it is very possible to have a
good solid online class (as evidenced my several on this listserve) my
perception is that an awful lot of them out there are not. Students can load
up with 21+ hours and work fulltime and still have a good GPA.
John
August 26, 2009 reply from Bob Jensen
Hi John,
I would not say
out loud to Amy Dunbar or Denny Beresford that they're easy graders ---
http://www.cs.trinity.edu/~rjensen/002cpe/02start.htm
I would not say
that out loud to the graduates of two principles-of-accounting weed-out
courses offered year after year at Brigham Young University, where classes meet
on relatively rare occasions for inspiration about accountancy but not technical
learning ---
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#BYUvideo
Try to tell the
graduates of Stanford University’s ADEPT Masters of Electrical Engineering
program that they had an easier time of it because the entire program is online.
There’s an
interesting article on how researchers misconstrue causality:
"Like elaborately plumed birds … we preen and strut and display our t-values.”
That was Edward Leamer’s uncharitable description of his profession in 1983.
“Cause and Effect: Instrumental variables help to isolate causal relationships,
but they can be taken too far,”
The Economist,
August 15-21, 2009, Page 68.
It is often the
case that distance education courses are taught by non-tenured faculty, and
non-tenured faculty may be easier with respect to grading than regular faculty
because they are even more in need of strong teaching evaluations to not lose
their jobs. The problem may have nothing whatsoever to do with online versus
onsite education.
I think it is
very rewarding to look at grading in formal studies using the same
full-time faculty teaching sections of online versus onsite students. By formal
study, I mean using the same instructors, the same materials, and essentially
the same examinations. The major five-year, multimillion dollar study that first
caught my eye was the SCALE experiments on the campus of the University of
Illinois where 30 courses from various disciplines were examined over a five
year experiment.
Yes the SCALE
experiments showed that some students got higher grades online, notably B
students who became A students and C students who became A students. The online
pedagogy tended to have no effect on D and F students
---
http://faculty.trinity.edu/rjensen/255wp.htm#Illinois
Listen to Dan
Stone’s audio about the SCALE Experiments ---
http://www.cs.trinity.edu/~rjensen/000cpe/00start.htm
But keep in
mind that in the SCALE experiments, the same instructor of a course was grading
both the online and onsite sections of the same course. The reason was not
likely to be that online sections were easier. The SCALE experiments collected a
lot of data pointing to more intense communications with instructors and more
efficient use of students’ time that is often wasted in going to classes.
The students in
the experiment were full-time on-campus students, such that the confounding
problems of having adult part-time students were not a factor in the SCALE
experiments on online, asynchronous learning.
A
Statement About Why the SCALE Experiments Were Funded
ALN = Asynchronous Learning
We are particularly interested in new outcomes
that may be possible through ALN. Asynchronous computer networks have the
potential to
improve contact with faculty,
perhaps making self-paced learning a realizable goal for some off- and on-campus
students. For example, a motivated student could progress more rapidly toward a
degree. Students who are motivated but find they cannot keep up the pace, may be
able to slow down and take longer to complete a degree, and not just drop out in
frustration. So we are interested in what impact ALN will have on outcomes such
as time-to-degree and student retention. There are many opportunities where ALN
may contribute to another outcome: lowering the cost of education, e.g., by
naturally introducing new values for old measures such as student-faculty
ratios. A different kind of outcome for learners who are juggling work and
family responsibilities, would be to be able to earn a degree or certification
at home. This latter is a special focus for us.
Alfred P. Sloan
Foundation's Program in
Learning Outside the Classroom at
http://w3.scale.uiuc.edu/scale/
Another study
that I love to point to was funded by the Chronicle of Higher Education.
Read about when one of the Chronicle’s senior editors took a
Governmental Accounting Course at the University of
Phoenix during which the instructor of the course had no idea that Goldie
Blumenstyk was
assessing how difficult or how easy the course was for students in general. I
think Goldie’s audio report of her experience is still available from the
Chronicle of Higher Education. Goldie came away from the course exhausted.
"U. of Phoenix Reports on Its Students' Academic
Achievement," by Goldie Blumenstyk, Chronicle of Higher Education,
June 5, 2008 ---
http://chronicle.com/daily/2008/06/3115n.htm?utm_source=at&utm_medium=en
The Chronicle's Goldie Blumenstyk has covered
distance education for more than a decade, and during that time she's written
stories about
the economics of for-profit education, the ways that online institutions
market themselves, and the demise of
the 50-percent rule. About the only thing she hadn't done, it seemed, was to
take a course from an online university. But this spring she finally took the
plunge, and now she has completed a class in government and nonprofit accounting
through the University of Phoenix. She shares tales from the cyber-classroom --
and her final grade --
in a podcast with Paul Fain, a Chronicle reporter.
Chronicle of Higher Education, June 11, 2008 (Audio) ---
http://chronicle.com/media/audio/v54/i40/cyber_classroom/
· All course materials (including textbooks) online; no additional textbooks to purchase
· $1,600 fee for the course and materials
· Woman instructor with respectable academic credentials and experience in course content
· Instructor had good communications with students and between students
· Total of 14 quite dedicated online students in course, most of whom were mature with full-time day jobs
· 30% of grade from team projects
· Many unassigned online helper tutorials that were not fully utilized by Goldie
· Goldie earned a 92 (A-)
· She gave a positive evaluation to the course and would gladly take other courses if she had the time
· She considered the course to have a heavy workload
The best place to begin searching for research on ALN
learning is at
http://www.sloan-c.org/publications/jaln/index.asp
Bob Jensen's threads on assessment are at
http://faculty.trinity.edu/rjensen/assess.htm
Bob Jensen’s threads on the dark side of online
education and distance education in general can be found at
http://faculty.trinity.edu/rjensen/000aaa/theworry.htm
Bob Jensen's threads on asynchronous learning are at
http://faculty.trinity.edu/rjensen/255wp.htm
Keep in mind that the University of Phoenix has a combination of onsite and
online degree programs.
Bob Jensen's threads on controversies of education
technology and online learning are at
http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
Bob Jensen's threads on online training and education
alternatives are at
http://faculty.trinity.edu/rjensen/crossborder.htm
Bob Jensen's threads on higher education controversies
are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
"Study: Little Difference in Learning in Online and In-Class Science
Courses," Inside Higher Ed, October 22, 2012 ---
http://www.insidehighered.com/quicktakes/2012/10/22/study-little-difference-learning-online-and-class-science-courses
A
study in Colorado has found little difference in
the learning of students in online or in-person introductory science
courses. The study tracked community college students who took science
courses online and in traditional classes, and who then went on to four-year
universities in the state. Upon transferring, the students in the two groups
performed equally well. Some science faculty members have expressed
skepticism about the ability of online students in science, due to the lack
of group laboratory opportunities, but the programs in Colorado work with
companies to provide home kits so that online students can have a lab
experience.
Jensen Comment
Firstly, note that online courses are not necessarily mass education (MOOC)
styled courses. The student-student and student-faculty interactions can be
greater online than onsite. For example, my daughter's introductory chemistry
class at the University of Texas had over 600 students; on the date of the final
examination the instructor had never met her and had zero control over her final
grade. On the other hand, her microbiology instructor in a graduate course at
the University of Maine became her husband over 20 years ago.
Another factor is networking. For example, Harvard Business School students
meeting face-to-face in courses bond in life-long networks that may be stronger
than for students who've never established networks via classes, dining halls,
volley ball games, softball games, rowing on the Charles River, etc. There's
more to learning than is typically tested in competency examinations.
My point is that there are many externalities to both onsite and online
learning. And concluding that there's "little difference in learning" depends
upon what you mean by learning. The SCALE experiments at the University of
Illinois found that online students having the same instructor tended to do
slightly better than onsite students. This is partly because there are fewer logistical
time wasters in online learning. The effect becomes larger for off-campus
students where commuting time (as in Mexico City) can take hours going to and
from campus.
http://faculty.trinity.edu/rjensen/255wp.htm
An Online Learning Experiment Overwhelms the University of Southern
California
"An Experiment Takes Off," by Doug Lederman, Inside Higher Ed, October 7, 2009
---
http://www.insidehighered.com/news/2009/10/07/uscmat#
When Karen Symms Gallagher
ran into fellow education deans last year, many of them were "politely
skeptical," the University of Southern California dean says (politely),
about
her institution's experiment to take its master's
program in teaching online.
Many of them seemed to
appreciate Gallagher's argument that the traditional model of teacher
education programs had largely failed to produce the many more top-notch
teachers that California (and so many other states) desperately needed. But
could a high-quality MAT program be delivered online? And through a
partnership with a for-profit entity (2Tor),
no less? Really?
Early results about
the program known as MAT@USC
have greatly pleased Gallagher and USC. One hundred
forty-four students enrolled in the Rossier School of Education program's
first full cohort in May, 50 percent more than anticipated and significantly
larger than the 100 students who started at that time in the traditional
master's in teaching program on the university's Los Angeles campus.
And this month, a new group
of 302 students started in the second of three planned "starts" per year,
meaning that USC has already quadrupled the number of would-be teachers it
is educating this year and, depending on how many students enroll in
January, is on track to increase it a few times more than that.
It will be a while --
years, probably, until outcomes on teacher certification exams are in and
the program's graduates have been successful (or not) in the classroom --
before questions about the program's quality and performance are fully
answered (though officials there point out that the technology platform,
like much online learning software, provides steady insight into how
successfully students are staying on track). But USC officials say that
short of quantitative measures such as those, they believe the online
program is attracting equally qualified students and is providing an
education that is fully equivalent to Rossier's on-ground master's program
-- goals that the institution viewed as essential so as not to "dilute the
brand" of USC's well-regarded program.
"So far, we've beaten the
odds," says Gallagher. "We're growing in scale while continuing to ensure
that we have a really good program."
"Scale" is a big buzzword
in higher education right now, as report after report and new undertaking
after new undertaking -- including the Obama administration's American
Graduation Initiative -- underscore the perceived need for more Americans
with postsecondary credentials. Many institutions -- especially community
colleges and for-profit colleges -- are taking it to heart, expanding their
capacity and enrolling more students. The push is less evident at other
types of colleges and universities, and almost a foreign concept at highly
selective institutions.
That's what is atypical,
if not downright exceptional, about the experiment at USC, which Inside
Higher Ed
explored in concept last fall. At that time, some
experts on distance learning and teacher education -- not unlike some of
Gallagher's dean peers -- wondered whether students would be willing to pay
the tuition of an expensive private university for an online program, among
other things.
Officials at the
university and 2Tor -- the company formed by the Princeton Review founder
John Katzman, which has provided the technology and administrative
infrastructure for the USC program -- were confident that they would be able
to tap into the market of Ivy League and other selective college graduates
who flock to programs like Teach for America in ever-growing numbers each
year but are also interested in getting a formal teaching credential right
away.
While those students
certainly have other options -- major public universities such as the
University of Wisconsin at Madison and the University of Virginia, and
private institutions like Columbia University's Teachers College and
Vanderbilt University, among others -- all of them require students to take
up residence in a way that doesn't work for everyone.
Haley Hiatt,
a 2005 graduate of Brigham Young University, actually
does reside in Los Angeles -- but she's also a relatively new mother who
"didn't want to have to put [her nearly 2-year-old daughter] in day care all
the time," she says. So after first contemplating master's programs in
history at institutions like Vanderbilt and George Washington University,
and then weighing a series of graduate programs at institutions in and
around Los Angeles, Hiatt entered the first cohort of the MAT@USC program.
She now joins her fellow students in "face to face" meetings (on the
Internet, using video chat technology) twice a week, but otherwise does most
of her other course work on her own time. "I find it takes more discipline
than I needed when I was in the classroom" every day at BYU, she says.
Of the initial cohort of
144 students, about 5 percent got their bachelor's degrees from Ivy League
institutions, and about 10 percent came from the crosstown rival University
of California at Los Angeles, says Gallagher. About 10 percent hail from
historically black colleges and universities -- the proportion of students
in the online program who are black (about 11 percent) is about double the
proportion in the on-ground program, though the campus program has slightly
higher minority numbers overall. Students in the online program are somewhat
older (average age 28 vs. 25 for the face-to-face program) and the average
college grade point average is identical for both iterations of the program:
3.0, USC officials say.
Other numbers please
Gallagher even more. A greater proportion of students in the online program
are in science-related fields than is true in the campus-based program, a
heartening sign given
the pressure on American teacher education programs
to ratchet up the number of science teachers they
produce.
Continued in article
Jensen Comment
The key to this kind of explosion in online enrollments is mostly the general
reputation of the university.
Many universities are finding online programs so popular that they are now
treating them like cash cows where students pay more for online tuition than for
onsite tuition. One university that openly admits this is the University of
Wisconsin at Milwaukee (UWM).
Bob Jensen's threads on why so many students prefer online education to
onsite education (even apart from cost savings) ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#DistanceEducation
Also see
http://faculty.trinity.edu/rjensen/assess.htm#OnlineVersusOnsite
Bob Jensen's threads on distance education
training and education alternatives ---
http://faculty.trinity.edu/rjensen/Crossborder.htm
Bob Jensen's threads on careers are at
http://faculty.trinity.edu/rjensen/Bookbob1.htm#careers
"Students prefer online courses: Classes popular with on-campus
students," CNN, January 13, 2006 ---
http://www.cnn.com/2006/EDUCATION/01/13/oncampus.online.ap/index.html
At least 2.3 million people took some kind of
online course in 2004, according to a recent survey by The Sloan Consortium,
an online education group, and two-thirds of colleges offering
"face-to-face" courses also offer online ones. But what were once two
distinct types of classes are looking more and more alike -- and often
dipping into the same pool of students.
At some schools, online courses -- originally
intended for nontraditional students living far from campus -- have proved
surprisingly popular with on-campus students. A recent study by South
Dakota's Board of Regents found 42 percent of the students enrolled in its
distance-education courses weren't so distant: they were located on campus
at the university that was hosting the online course.
Numbers vary depending on the policies of
particular colleges, but other schools also have students mixing and
matching online and "face-to-face" credits. Motives range from lifestyle to
accommodating a job schedule to getting into high-demand courses.
Classes pose challenges
Washington State University had about 325 on-campus undergraduates taking one
or more distance courses last year. As many as 9,000 students took both
distance and in-person classes at Arizona State University last year.
"Business is really about providing options to
their customers, and that's really what we want to do," said Sheila Aaker,
extended services coordinator at Black Hills State.
Still, the trend poses something of a dilemma for
universities.
They are reluctant to fill slots intended for
distance students with on-campus ones who are just too lazy to get up for
class. On the other hand, if they insist the online courses are just as
good, it's hard to tell students they can't take them. And with the student
population rising and pressing many colleges for space, they may have little
choice.
In practice, the policy is often shaded. Florida
State University tightened on-campus access to online courses several years
ago when it discovered some on-campus students hacking into the system to
register for them. Now it requires students to get an adviser's permission
to take an online class.
Online, in-person classes blending
Many schools, like Washington State and Arizona State, let individual
departments and academic units decide who can take an online course. They say
students with
legitimate academic needs -- a conflict with another class, a course they
need to graduate that is full -- often get permission, though they still
must take some key classes in person.
In fact, the distinction between online and
face-to-face courses is blurring rapidly. Many if not most traditional
classes now use online components -- message boards, chat rooms, electronic
filing of papers. Students can increasingly "attend" lectures by downloading
a video or a podcast.
At Arizona State, 11,000 students take fully online
courses and 40,000 use the online course management system, which is used by
many "traditional" classes. Administrators say the distinction between
online and traditional is now so meaningless it may not even be reflected in
next fall's course catalogue.
Arizona State's director of distance learning, Marc
Van Horne, says students are increasingly demanding both high-tech delivery
of education, and more control over their schedules. The university should
do what it can to help them graduate on time, he says.
"Is that a worthwhile goal for us to pursue? I'd
say 'absolutely,"' Van Horne said. "Is it strictly speaking the mission of a
distance learning unit? Not really."
Then there's the question of whether students are
well served by taking a course online instead of in-person. Some teachers
are wary, saying showing up to class teaches discipline, and that lectures
and class discussions are an important part of learning.
But online classes aren't necessarily easier.
Two-thirds of schools responding to a recent survey by The Sloan Consortium
agreed that it takes more discipline for students to succeed in an online
course than in a face-to-face one.
"It's a little harder to get motivated," said
Washington State senior Joel Gragg, who took two classes online last year
(including "the psychology of motivation"). But, he said, lectures can be
overrated -- he was still able to meet with the professor in person when he
had questions -- and class discussions are actually better online than in a
college classroom, with a diverse group exchanging thoughtful postings.
"There's young people, there's old people, there's
moms, professional people," he said. "You really learn a lot more."
Bob Jensen's threads on distance education and training alternatives are
at
http://faculty.trinity.edu/rjensen/crossborder.htm
The 2006 National Survey of Student Engagement, released November 13,
2006, for the first time offers a close look at distance education, offering
provocative new data suggesting that e-learners report higher levels of
engagement, satisfaction and academic challenge than their on-campus peers ---
http://nsse.iub.edu/NSSE_2006_Annual_Report/index.cfm
"The Engaged E-Learner," by Elizabeth Redden, Inside Higher Ed,
November 13, 2006 ---
http://www.insidehighered.com/news/2006/11/13/nsse
The 2006
National Survey of Student Engagement, released
today, for the first time offers a close look at distance education,
offering provocative new data suggesting that e-learners report higher
levels of engagement, satisfaction and academic challenge than their
on-campus peers.
Beyond the numbers, however, what institutions
choose to do with the data promises to attract extra attention to this
year’s report.
NSSE is one of the few standardized measures of
academic outcomes that most officials across a wide range of higher
education institutions agree offers something of value. Yet NSSE does not
release institution-specific data, leaving it to colleges to choose whether
to publicize their numbers.
Colleges are under mounting pressure, however, to
show in concrete, measurable ways that they are successfully educating
students, fueled in part by the recent release of the
report from the
Secretary of Education’s Commission on the Future of Higher Education,
which emphasizes the need for the development of
comparable measures of student learning. In the commission’s report and in
college-led efforts to heed the commission’s call,
NSSE has been embraced as one way to do that. In this climate, will a
greater number of colleges embrace transparency and release their results?
Anywhere between one-quarter and one-third of the
institutions participating in NSSE choose to release some data, said George
Kuh, NSSE’s director and a professor of higher education at Indiana
University at Bloomington. But that number includes not only those
institutions that release all of the data, but also those that pick and
choose the statistics they’d like to share.
In the “Looking Ahead” section that concluded the
2006 report, the authors note that NSSE can “contribute to the higher
education improvement and accountability agenda,” teaming with institutions
to experiment with appropriate ways to publicize their NSSE data and
developing common templates for colleges to use. The report cautions that
the data released for accountability purposes should be accompanied by other
indicators of student success, including persistence and graduation rates,
degree/certificate completion rates and measurements of post-college
endeavors.
“Has this become a kind of a watershed moment when
everybody’s reporting? No. But I think what will happen as a result of the
Commission on the Future of Higher Ed, Secretary (Margaret) Spellings'
workgroup, is that there is now more interest in figuring out how to do
this,” Kuh said.
Charles Miller, chairman of the Spellings
commission, said he understands that NSSE’s pledge not to release
institutional data has encouraged colleges to participate — helping the
survey, first introduced in 1999, get off the ground and gain wide
acceptance. But Miller said he thinks that at this point, any college that
chooses to participate in NSSE should make its data public.
“Ultimately, the duty of the colleges that take
public funds is to make that kind of data public. It’s not a secret that the
people in the academy ought to have. What’s the purpose of it if it’s just
for the academy? What about the people who want to get the most for their
money?”
Participating public colleges are already obliged
to provide the data upon request, but Miller said private institutions,
which also rely heavily on public financial aid funds, should share that
obligation.
Kuh said that some colleges’ reluctance to
publicize the data stems from a number of factors, the primary reason being
that they are not satisfied with the results and feel they might reflect
poorly on the institution.
In addition, some college officials fear that the
information, if publicized, may be misused, even conflated to create a
rankings system. Furthermore, sharing the data would represent a shift in
the cultural paradigm at some institutions used to keeping sensitive data to
themselves, Kuh said.
“The great thing about NSSE and other measures like
it is that it comes so close to the core of what colleges and universities
are about — teaching and learning. This is some of the most sensitive
information that we have about colleges and universities,” Kuh said.
But Miller said the fact that the data get right to
the heart of the matter is precisely why it should be publicized. “It
measures what students get while they’re at school, right? If it does that,
what’s the fear of publishing it?” Miller asked. “If someone would say,
‘It’s too hard to interpret,’ then that’s an insult to the public.” And if
colleges are afraid of what their numbers would suggest, they shouldn’t
participate in NSSE at all, Miller said.
However, Douglas Bennett, president of Earlham
College in Indiana and chair of NSSE’s National Advisory Board, affirmed
NSSE’s commitment to opening survey participation to all institutions
without imposing any pressure that they should make their institutional
results public. “As chair of the NSSE board, we believe strongly that
institutions own their own data and what they do with it is up to them.
There are a variety of considerations institutions are going to take into
account as to whether or not they share their NSSE data,” Bennett said.
However, as president of Earlham, which releases
all of its NSSE data and even releases its accreditation reports, Bennett
said he thinks colleges, even private institutions, have a professional and
moral obligation to demonstrate their effectiveness in response to
accountability demands — through NSSE or another means a college might deem
appropriate.
This Year’s Survey
The 2006 NSSE survey, which is based on data from
260,000 randomly-selected first-year and senior students at 523 four-year
institutions(NSSE’s companion survey, the
Community College Survey of
Student Engagement, focuses on two-year colleges)
looks much more deeply than previous iterations of the survey did into the
performance of online students.
Distance learning students outperform or perform on
par with on-campus students on measures including level of academic
challenge; student-faculty interaction; enriching educational experiences;
and higher-order, integrative and reflective learning; and gains in
practical competence, personal and social development, and general
education. They demonstrate lower levels of engagement when it comes to
active and collaborative learning.
Karen Miller, a professor of education at the
University of Louisville who studies online learning, said the results
showing higher or equal levels of engagement among distance learning
students make sense: “If you imagine yourself as an undergraduate in a
fairly large class, you can sit in that class and feign engagement. You can
nod and make eye contact; your mind can be a million miles away. But when
you’re online, you’ve got to respond, you’ve got to key in your comments on
the discussion board, you’ve got to take part in the group activities.”
Plus, Miller added, typing is a more complex
psycho-motor skill than speaking, requiring extra reflection. “You see what
you have said, right in front of your eyes, and if you realize it’s kind of
half-baked you can go back and correct it before you post it.”
Also, said Kuh, most of the distance learners
surveyed were over the age of 25. “Seventy percent of them are adult
learners. These folks are more focused; they’re better able to manage their
time and so forth,” said Kuh, who added that many of the concerns
surrounding distance education focus on traditional-aged students who may
not have mastered their time management skills.
Among other results from the 2006 NSSE survey:
- Those students who come to college less
well-prepared academically or from historically underrepresented groups
tend to benefit from
engagement in educationally purposeful
activities even more than their peers do.
- First-year and senior students spend an
average of about 13 to 14 hours per week preparing for classes, much
less than what faculty members say is needed.
- Student engagement is positively correlated to
grades and persistence between the first and second year of college.
- New students study fewer hours during their
first year than they expected to when starting college.
- First-year students at research universities
are more likely than students at other types of institutions to
participate in a learning community.
- First-year students at liberal arts colleges
participate in class discussions more often and view their faculty more
positively than do students at other institutions.
- Seniors at master’s level colleges and
universities give class presentations and work with their peers on
problems in class more than students at other types of institutions do.
Bob Jensen's threads on distance education and training alternatives
around the world are at
http://faculty.trinity.edu/rjensen/Crossborder.htm
Soaring Popularity of E-Learning Among Students But Not Faculty
How many U.S. students took at least one online course from a legitimate college
in Fall 2005?
More students are taking online college courses than
ever before, yet the majority of faculty still aren’t warming up to the concept
of e-learning, according to a national survey from the country’s largest
association of organizations and institutions focused on online education . . .
‘We didn’t become faculty to sit in front of a computer screen,’
Elia Powers, "Growing Popularity of E-Learning," Inside Higher Ed,
November 10, 2006 ---
http://www.insidehighered.com/news/2006/11/10/online
More students are taking online college courses
than ever before, yet the majority of faculty still aren’t warming up to the
concept of e-learning, according to a national survey from the country’s
largest association of organizations and institutions focused on online
education.
Roughly 3.2 million students took at least one
online course from a degree-granting institution during the fall 2005 term,
the Sloan Consortium said. That’s double the number who reported doing so in
2002, the first year the group collected data, and more than 800,000 above
the 2004 total. While the number of online course participants has increased
each year, the rate of growth slowed from 2003 to 2004.
The report, a joint partnership between the group
and the College Board, defines online courses as those in which 80 percent
of the content is delivered via the Internet.
The Sloan Survey of Online Learning,
“Making the Grade: Online Education in the United States, 2006,”
shows that 62 percent of chief academic officers say
that the learning outcomes in online education are now “as good as or
superior to face-to-face instruction,” and nearly 6 in 10 agree that
e-learning is “critical to the long-term strategy of their institution.”
Both numbers are up from a year ago.
Researchers at the Sloan Consortium, which is
administered through Babson College and Franklin W. Olin College of
Engineering, received responses from officials at more than 2,200 colleges
and universities across the country. (The report makes few references to
for-profit colleges, a force in the online market, in part because of a lack
of survey responses from those institutions.)
Much of the report is hardly surprising. The bulk
of online students are adult or “nontraditional” learners, and more than 70
percent of those surveyed said online education reaches students not served
by face-to-face programs.
What stands out is the number of faculty who still
don’t see e-learning as a valuable tool. Only about one in four academic
leaders said that their faculty members “accept the value and legitimacy of
online education,” the survey shows. That number has remained steady
throughout the four surveys. Private nonprofit colleges were the least
accepting — about one in five faculty members reported seeing value in the
programs.
Elaine Allen, co-author of the report and a Babson
associate professor of statistics and entrepreneurship, said those numbers
are striking.
“As a faculty member, I read that response as, ‘We
didn’t become faculty to sit in front of a computer screen,’ ” Allen said.
“It’s a very hard adjustment. We sat in lectures for an hour when we were
students, but there’s a paradigm shift in how people learn.”
Barbara Macaulay, chief academic officer at UMass
Online, which offers programs through the University of Massachusetts, said
nearly all faculty members teaching the online classes there also teach
face-to-face courses, enabling them to see where an online class could fill
in the gap (for instance, serving a student who is hesitant to speak up in
class).
She said she isn’t surprised to see data
illustrating the growing popularity of online courses with students, because
her program has seen rapid growth in the last year. Roughly 24,000 students
are enrolled in online degree and certificate courses through the university
this fall — a 23 percent increase from a year ago, she said.
“Undergraduates see it as a way to complete their
degrees — it gives them more flexibility,” Macaulay said.
The Sloan report shows that about 80 percent of
students taking online courses are at the undergraduate level. About half
are taking online courses through community colleges and 13 percent through
doctoral and research universities, according to the survey.
Nearly all institutions with total enrollments
exceeding 15,000 students have some online offerings, and about two-thirds
of them have fully online programs, compared with about one in six at the
smallest institutions (those with 1,500 students or fewer), the report
notes. Allen said private nonprofit colleges are often set in enrollment
totals and not looking to expand into the online market.
The report indicates that two-year colleges are particularly willing to be
involved in online learning.
“Our institutions tend to embrace changes a little
more readily and try different pedagogical styles,” said Kent Phillippe, a
senior research associate at the American Association of Community Colleges.
The report cites a few barriers to what it calls the “widespread adoption of
online learning,” chief among them the concern among college officials that
some of their students lack the discipline to succeed in an online setting.
Nearly two-thirds of survey respondents defined that as a barrier.
Allen, the report’s co-author, said she thinks that
issue arises mostly in classes in which work can be turned in at any time
and lectures can be accessed at all hours. “If you are holding class in real
time, there tends to be less attrition,” she said. The report doesn’t
differentiate between the live and non-live online courses, but Allen said
she plans to include that in next year’s edition.
Few survey respondents said acceptance of online
degrees by potential employers was a critical barrier — although liberal
arts college officials were more apt to see it as an issue.
November 10, 2006 reply from John Brozovsky
[jbrozovs@vt.edu]
Hi Bob:
One reason why might be what I have seen. The
in residence accounting students that I talk with take online classes
here because they are EASY and do not take much work. This would be very
popular with students but not generally so with faculty.
John
November 10, 2006 reply from Bob Jensen
Hi John,
Then there is a quality control problem wherever this is the case. It
would be a travesty if any respected college had two or more categories of
academic standards or faculty assignments.
Variations in academic standards have long been a problem between
part-time versus full-time faculty, although grade inflation can be higher
or lower among part-time faculty. In one instance, it’s the tenure-track
faculty who give higher grades because they're often more worried about
student evaluations. At the opposite extreme it is part-time faculty who
give higher grades for many reasons that we can think of if we think about
it.
One thing that I'm dead certain about is that highly motivated students
tend to do better in online courses ceteris paribus. Reasons are mainly that
time is used more efficiently in getting to class (no wasted time driving or
walking to class), less wasted time getting teammates together on team
projects, and fewer reasons for missing class.
Also online alternatives offer some key advantages for certain types of
handicapped students ---
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm
My opinions on learning advantages of E-Learning were heavily influenced
by the most extensive and respected study of online versus onsite learning
experiments in the SCALE experiments
using full-time resident students at the University of Illinois ---
http://faculty.trinity.edu/rjensen/255wp.htm#Illinois
In the SCALE experiments cutting across 30 disciplines, it was generally
found that motivated students learned better online than their onsite
counterparts having the same instructors. However, there was no significant
impact on students who got low grades in online versus onsite treatment
groups.
I think the main problem for faculty is that online teaching tends to
burn out instructors faster than onsite teaching does. This was also
evident in the SCALE experiments. When done correctly, online courses entail
far more communication between instructors and students. Online
learning also takes more preparation time when it is done correctly.
My hero for online learning is still Amy Dunbar who
maintains high standards for everything:
http://www.cs.trinity.edu/~rjensen/002cpe/02start.htm
http://faculty.trinity.edu/rjensen/book01q4.htm#Dunbar
Bob Jensen
November 10, 2006 reply from John Brozovsky
[jbrozovs@vt.edu]
Hi Bob:
Also, many times it is not done 'right,' and when it is
not done right the students do not get the same education. Students generally
do not complain about getting 'less for their money.' Since we do not offer
online classes in our department, the ones our students take are the
university-required general education courses, and our students in particular
are not unhappy about being shortchanged in that area, since they frequently
would have preferred none anyway.
John
Bob Jensen's threads on open sharing and education technology are at
http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
Bob Jensen's threads on online training and education alternatives are at
http://faculty.trinity.edu/rjensen/crossborder.htm
Motivations for Distance Learning ---
http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#Motivations
Bob Jensen's threads on the dark side of online learning and teaching are
at
http://faculty.trinity.edu/rjensen/000aaa/theworry.htm
October 5, 2006 message from Carolyn Kotlas
[kotlas@email.unc.edu]
STUDENTS' PERCEPTIONS OF ONLINE LEARNING
"The ultimate question for educational research is
how to optimize instructional designs and technology to maximize learning
opportunities and achievements in both online and face-to-face
environments." Karl L.Smart and James J. Cappel studied two undergraduate
courses -- an elective course and a required course -- that incorporated
online modules into traditional classes. Their research of students'
impressions and satisfaction with the online portions of the classes
revealed mixed results:
-- "participants in the elective course rated
use of the learning modules slightly positive while students in the
required course rated them slightly negative"
-- "while students identified the use of
simulation as the leading strength of the online units, it was also the
second most commonly mentioned problem of these units"
-- "students simply did not feel that the
amount of time it took to complete the modules was worth what was
gained"
The complete paper, "Students' Perceptions of Online Learning: A
Comparative Study" (JOURNAL OF INFORMATION TECHNOLOGY EDUCATION, vol. 5,
2006, pp. 201-19), is available online at
http://jite.org/documents/Vol5/v5p201-219Smart54.pdf.
Current and back issues of the Journal of Information Technology
Education (JITE) [ISSN 1539-3585 (online) 1547-9714 (print)] are available
free of charge at
http://jite.org/.
The peer-reviewed journal is published annually by the Informing Science
Institute. For more information contact: Informing Science Institute, 131
Brookhill Court, Santa Rosa, California 95409 USA; tel: 707-531-4925; fax:
480-247-5724;
Web:
http://informingscience.org/.
I have heard some faculty argue that
asynchronous Internet courses just do not mesh with Trinity's on-campus mission.
The SCALE experiments at the University of Illinois indicate that many students
learn better and prefer online courses even if they are full-time, resident
students. The University of North Texas is finding out the same thing. There may
be some interest in what our competition may be in the future even for
full-time, on-campus students at private as well as public colleges and
universities.
On January 17, 2003, Ed Scribner forwarded this article from The Dallas
Morning News
Students Who Live
on Campus Choosing Internet Courses Syndicated From: The Dallas Morning
News
DALLAS - Jennifer
Pressly could have walked to a nearby lecture hall for her U.S. history class
and sat among 125 students a few mornings a week.
But the 19-year-old
freshman at the University of North Texas preferred rolling out of bed and
attending class in pajamas at her dorm-room desk. Sometimes she would wait
until Saturday afternoon.
The teen from
Rockwall, Texas, took her first college history class online this fall
semester. She never met her professor and knew only one of her 125 classmates:
her roommate.
"I take
convenience over lectures," she said. "I think I would be bored to
death if I took it in lecture."
She's part of a
controversial trend that has surprised many university officials across the
country. Given a choice, many traditional college students living on campus
pick an online course. Most universities began offering courses via the
Internet in the late 1990s to reach a different audience - older students who
commute to campus and are juggling a job and family duties.
During the last year,
UNT began offering an online option for six of its highest-enrollment courses
that are typically taught in a lecture hall with 100 to 500 students. The
online classes, partly offered as a way to free up classroom space in the
growing school, filled up before pre-registration ended, UNT officials said.
At UNT, 2,877 of its roughly 23,000 undergraduates are taking at least one
course online.
Nationwide, colleges
are reporting similar experiences, said Sally Johnstone, director of WCET, a
Boulder, Colo., cooperative of state higher education boards and universities
that researches distance education. Kansas State University, in a student
survey last spring, discovered that 80 percent of its online students were
full-time and 20 percent were part-time, the opposite of the college's
expectations, Johnstone said.
"Why pretend
these kids want to be in a class all the time? They don't, but kids don't come
to campus to sit in their dorm rooms and do things online exclusively,"
she said. "We're in a transition, and it's a complex one."
The UT Telecampus, a
part of the University of Texas System that serves 15 universities and
research facilities, began offering online undergraduate classes in
state-required courses two years ago. Its studies show that 80 percent of the
2,260 online students live on campus, and the rest commute.
Because they are
restricted to 30 students each, the UT System's online classes are touted as a
more intimate alternative to lecture classes, said Darcy Hardy, director of
the UT Telecampus.
"The
freshman-sophomore students are extremely Internet-savvy and understand more
about online options and availability than we could have ever imagined,"
Hardy said.
Online education
advocates say professors can reach students better online than in lecture
classes because of the frequent use of e-mail and online discussion groups.
Those who oppose the idea say they worry that undergraduates will miss out on
the debate, depth and interaction of traditional classroom instruction.
UNT, like most
colleges, is still trying to figure out the effect on its budget. The
professorial salary costs are the same, but an online course takes more money
to develop. The online students, however, free up classroom space and
eliminate the need for so many new buildings in growing universities. The
price to enroll is typically the same for students, whether they go to a
classroom or sit at their computer.
Mike Campbell, a
history professor at UNT for 36 years, does not want to teach an online class,
nor does he approve of offering undergraduate history via the Internet.
"People
shouldn't be sitting in the dorms doing this rather than walking over
here," he said. "That is based on a misunderstanding of what matters
in history."
In his class of 125,
he asks students rhetorical questions they answer en masse to be sure they're
paying attention, he said. He goes beyond the textbook, discussing such topics
as the moral and legal issues surrounding slavery.
He said he compares
the online classes to the correspondence courses he hated but had to teach
when he came to UNT in 1966. Both methods are too impersonal, he said,
recalling how he mailed assignments and tests to correspondence students.
UNT professors who
teach online say the courses are interactive, unlike correspondence courses.
Matt Pearcy has
lectured 125 students for three hours at a time.
"You'd try to be
entertaining," he said. "You have students who get bored after 45
minutes, no matter what you're doing. They're filling out notes, doing their
to-do list, reading their newspaper in front of you."
In his online U.S.
history class at UNT, students get two weeks to finish each lesson. They read
text, complete click-and-drag exercises, like one that matches terms with
historical figures, and take quizzes. They participate in online discussions
and group projects, using e-mail to communicate.
"Hands-down, I
believe this is a more effective way to teach," said Pearcy, who is based
in St. Paul, Minn. "In this setting, they go to the class when they're
ready to learn. They're interacting, so they're paying attention."
Pressly said she
liked the hands-on work in the online class. She could do crossword puzzles to
reinforce her history lessons. Or she could click an icon and see what Galileo
saw through his telescope in the 17th century.
"I took more
interest in this class than the other ones," she said.
The class, though,
required her to be more disciplined, she said, and that added stress. Two
weeks in a row, she waited till 11:57 p.m. Sunday - three minutes before the
deadline - to turn in her assignment.
Online courses aren't
for everybody.
"The thing about
sitting in my dorm, there's so much to distract me," said Trevor Shive, a
20-year-old freshman at UNT. "There's the Internet. There's TV. There's
radio."
He said students on
campus should take classes in the real, not virtual, world.
"They've got
legs; they can walk to class," he said.
Continued in the article at http://www.dallasnews.com/
January 17, 2003 response from John L. Rodi
[jrodi@IX.NETCOM.COM]
I would have added
one additional element. Today I think too many of us tend to teach accounting
the way you teach drivers education: get in the car, turn on the key, and off
you go. If something goes wrong with the car you are sunk, since you know
nothing conceptually. Furthermore, it makes you a victim of those who do.
Conceptual accounting education teaches you to respond to choices, that is,
not only how to drive but what to drive. Thanks for the wonderful analogy.
John Rodi
El Camino College
January 21 reply
from
On the subject of
technology and teaching accounting, I wonder how many of you are in the SAP
University Alliance and using it for accounting classes. I just teach advanced
financial accounting, and have not found a use for it there. However, I have
often felt that there is a place for it in intro financial, in managerial and
in AIS. On the latter, there is at least one good text book containing SAP
exercises and problems.
Although there are
over 400 universities in the world in the program, one of the areas where use
is lowest is accounting courses. The limitation appears to be related to a
combination of the learning curve for professors, together with an uncertainty
as to how it can be used to effectively teach conceptual material or otherwise
fit into curricula.
Gerald Trites,
FCA
Professor of Accounting and Information Systems
St Francis Xavier University
Antigonish, Nova Scotia
Website - http://www.stfx.ca/people/gtrites
The SAP University Alliance homepage is
at http://www.sap.com/usa/company/ua/
In today's
fast-paced, technically advanced society, universities must master the latest
technologies, not only to achieve their own business objectives
cost-effectively but also to prepare the next generation of business leaders.
To meet the demands for quality teaching, advanced curriculum, and more
technically sophisticated graduates, your university is constantly searching
for innovative ways of acquiring the latest information technology while
adhering to tight budgetary controls.
SAP™ can
help. A world leader in the development of business software, SAP is making
its market-leading, client/server-based enterprise software, the R/3®
System, available to the higher education community. Through our SAP
University Alliance Program, we are proud to offer you the world's most
popular software of its kind for today's businesses. SAP also provides setup,
follow-up consulting, and R/3 training for faculty - all at our expense. The
SAP R/3 System gives you the most advanced software capabilities used by
businesses of all sizes and in all industries around the world.
There are many ways a
university can benefit from an educational alliance with SAP. By partnering
with SAP and implementing the R/3 System, your university can:
- Take advantage
of a powerful cross-functional teaching tool
Because R/3 is a comprehensive, integrated business system with a proven
track record in the real world, it is an excellent tool for teaching
students how a business really works.
- Access advanced
software technology
Sophisticated in both architecture and functionality, R/3 is the world's
most advanced business enterprise software available today. Faculty and
students have the opportunity to stay in the forefront of business
software innovation.
- Enhance
marketability
Experience with R/3 is prized by corporate recruiters. Students
well-versed in the principles of management and the uses of R/3 are highly
marketable to SAP, our customers, and partners.
- Attract leading
educators
Prominent educators in business and information technology may find the
university's alliance with SAP attractive in terms of access to research
opportunities, advanced software, and users of R/3.
- Pursue research
opportunities
Faculty members can pursue research in many areas of business and
information technology.
- Broaden
outreach
SAP maintains an extensive network of contacts with leading consulting
firms that work as our partners in implementing R/3. What's more, our
customers are some of the largest and most prestigious corporations around
the world. As an Alliance member, your university can tap into this
network of contacts to broaden your reach into the business community.
- Stay in touch
with industry and product trends
SAP strategic business units work closely with customers, user groups,
industry associations, and leading consulting firms to ensure that we
continue to deliver leading-edge capability. As an Alliance member, your
university can keep abreast of new enterprise computing ideas and trends
through the SAP strategic business units.
January 6, 2006 message from Carolyn Kotlas
[kotlas@email.unc.edu]
No Significant Difference Phenomenon website
http://www.nosignificantdifference.org/
The website is a companion piece to Thomas L.
Russell's book THE NO SIGNIFICANT DIFFERENCE PHENOMENON, a bibliography of
355 research reports, summaries, and papers that document no significant
differences in student outcomes between alternate modes of education
delivery.
DISTANCE LEARNING AND FACULTY CONCERNS
Despite the growing number of distance learning
programs, faculty are often reluctant to move their courses into the online
medium. In "Addressing Faculty Concerns About Distance Learning" (ONLINE
JOURNAL OF DISTANCE LEARNING ADMINISTRATION, vol. VIII, no. IV, Winter 2005)
Jennifer McLean discusses several areas that influence faculty resistance,
including: the perception that technical support and training is lacking,
the fear of being replaced by technology, and the absence of a
clearly-understood institutional vision for distance learning. The paper is
available online at
http://www.westga.edu/%7Edistance/ojdla/winter84/mclean84.htm
The Online Journal of Distance Learning
Administration is a free, peer-reviewed quarterly published by the Distance
and Distributed Education Center, The State University of West Georgia, 1600
Maple Street, Carrollton, GA 30118 USA; Web:
http://www.westga.edu/~distance/jmain11.html .
December 10, 2004 message from Carolyn Kotlas [kotlas@email.unc.edu]
E-LEARNING ONLINE PRESENTATIONS
The University of Calgary Continuing Education
sponsors Best Practices in E-Learning, a website that provides a forum for
anyone working in the field to share their best practices. This month's
presentations include:
-- "To Share or Not To Share: There is No
Question" by Rosina Smith Details a new model for permitting "the
reuse, multipurposing, and repurposing of existing content"
-- "Effective Management of Distributed Online
Educational Content" by Gary Woodill "[R]eviews the history of
online educational content, and argues that the future is in distributed
content learning management systems that can handle a wide diversity of
content types . . . identifies 40 different genres of online educational
content (with links to examples)"
Presentations are in various formats, including
Flash, PDF, HTML, and PowerPoint slides. Registered users can interact with
the presenters and post to various discussion forums on the website. There is
no charge to register and view presentations. You can also subscribe to their
newsletter which announces new presentations each month. (Note: No archive of
past months' presentations appears to be on the website.)
For more information, contact: Rod Corbett, University of Calgary
Continuing Education; tel:403-220-6199 or 866-220-4992 (toll-free); email: rod.corbett@ucalgary.ca
; Web: http://elearn.ucalgary.ca/showcase/.
NEW APPROACHES TO
EVALUATING ONLINE LEARNING
"The clear
implication is that online learning is not good enough and needs to prove its
worth before gaining full acceptance in the pantheon of educational practices.
This comparative frame of reference is specious and irrelevant on several
counts . . ." In "Escaping the Comparison Trap: Evaluating Online
Learning on Its Own Terms (INNOVATE, vol. 1, issue 2, December 2004/January
2005), John Sener writes that, rather than being inferior to classroom
instruction, "[m]any online learning practices have demonstrated superior
results or provided access to learning experiences not previously
possible." He describes new evaluation models that are being used to
judge online learning on its own merits. The paper is available online at http://www.innovateonline.info/index.php?view=article&id=11&action=article.
You will need to
register on the Innovate website to access the paper; there is no charge for
registration and access.
Innovate [ISSN
1552-3233] is a bimonthly, peer-reviewed online periodical published by the
Fischler School of Education and Human Services at Nova Southeastern
University. The journal focuses on the creative use of information technology
(IT) to enhance educational processes in academic, commercial, and government
settings. Readers can comment on articles, share material with colleagues and
friends, and participate in open forums. For more information, contact James
L. Morrison, Editor-in-Chief, Innovate; email: innovate@nova.edu
; Web: http://www.innovateonline.info/.
I read the following for a scheduled program of the 29th Annual Accounting
Education Conference, October 17-18, 2003 Sponsored by the Texas CPA
Society, San Antonio Airport Hilton.
WEB-BASED AND
FACE-TO-FACE INSTRUCTION:
A COMPARISON OF LEARNING OUTCOMES IN A FINANCIAL ACCOUNTING
COURSE
Explore the results
of a study conducted over a four-semester period that focused on the same
graduate level financial accounting course that was taught using web-based
instruction and face-to-face instruction. Discuss the comparison of
student demographics and characteristics, course satisfaction, and comparative
statistics related to learning outcomes.
Doug Rusth/associate
professor/University of Houston at Clear Lake/Clear Lake
Bob Jensen's threads on asynchronous
versus synchronous learning are at http://faculty.trinity.edu/rjensen/255wp.htm
Note in particular the research outcomes of The Scale Experiment at the
University of Illinois --- http://faculty.trinity.edu/rjensen/255wp.htm#Illinois
Once again, my advice to new faculty
is at http://faculty.trinity.edu/rjensen/000aaa/newfaculty.htm
Minimum Grades as a School Policy
Question
Should a student who gets a zero (for not doing anything) or 23% (for doing
something badly) on an assignment, exam, or term paper be automatically (as a
matter of school policy) upgraded to a 60% no matter what proportion the grade
is toward a course's final grade?
Should a student get 60% even if he or she fails to show up for an examination?
Jensen Comment
This could lead to some strategies like "don't spend any time on the term paper
and concentrate on passing the final examination or vice versa."
Such strategies are probably not in the spirit of the course design, especially
when the instructor intended for students to have to write a paper.
"Time to Add Basket Weaving as a Course," by Ben Baker, The Irascible
Professor, June 22, 2008 ---
http://irascibleprofessor.com/comments-06-22-08.htm
Bob Jensen's threads on higher education controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
Bob Jensen's threads on assessment are at
http://faculty.trinity.edu/rjensen/assess.htm
“If a student doesn’t come to school,” he continued,
“how can you justify passing that kid?”
Fernanda Santos
"Bronx School’s Top Ranking Stirs Wider Doubts About Rating System,"
by Fernanda Santos, The New York Times, January 20, 2011 ---
http://www.nytimes.com/2011/01/21/education/21grades.html?_r=1&hpw
One of the trademarks of New York City’s school
accountability system is an equation that assigns every school a letter
grade, A through F, based on a numerical score from 1 to 100.
Lynn Passarella, facing camera, the principal of
the Theater Arts Production Company School, outside the school on Thursday.
She declined to comment on the allegations about her school’s grading
practices.
A parent pulling up the latest report card for the
Theater Arts Production Company School in the Bronx would find that it
earned the score of 106.3 (including extra credit).
But that very empirical-sounding number, which was
the highest of any high school in the city, is based in part on subjective
measures like “academic expectations” and “engagement,” as measured by
voluntary parent, teacher and student surveys.
And, according to some teachers at the school, even
the more tangible factors in the score — graduation rates and credits earned
by students — were not to be taken at face value. The school has a policy
that no student who showed up for class should fail, and even some who
missed many days of school were still allowed to pass and graduate.
The Department of Education, which revealed on
Wednesday that it was investigating grading practices at the school, says
that it has a team devoted to analyzing school statistics every year and
looking for red flags like abnormal increases in student scores or dropout
rates. But a department official said that nothing in its data had raised
suspicions about the school, known as Tapco, until a whistle-blower filed a
complaint in October.
Still, in a data-driven system where letter grades
can determine a school’s fate, one big question looms over the
investigation: If the allegations turn out to be true, are they an exception
or a sign of a major fault in the school accountability system?
“The D.O.E. has absolutely created a climate for
these types of scandals to happen,” Michael Mulgrew, the president of the
teachers’ union, said in an interview. “Their culture of ‘measure everything
and question nothing a principal tells you’ makes it hard to figure out
what’s real and what’s not real inside a school.”
There are many gradations of impropriety, and it is
unclear if any of them apply to Tapco, which has about 500 students and also
includes a middle school. The school’s teacher handbook states that no
student should fail a class if he or she regularly attends, and that
students who miss work should be given “multiple opportunities for student
success and work revision.”
Current and former teachers at the school said that
even students who were regularly absent were given passing grades, in some
cases with course credits granted by the principal without a teacher’s
knowledge. Some students’ records showed credits for courses the school did
not offer.
The investigation into the irregularities at Tapco,
which began in October, also includes allegations that the school’s
principal, Lynn Passarella, manipulated teacher and parent surveys, which
represent 10 of the 100 points in a school’s score. Graduation rates,
passing rates on Regents exams and earned credits constitute most of the
score.
Ms. Passarella declined to comment on the
allegations.
A spokesman for the Education Department, Matthew
Mittenthal, said: “We take every allegation of misconduct seriously, and
hope that the public can reserve judgment until the investigation is
complete.”
Sometimes, the analysts who pore over the data
uncover serious problems. Last year, the Education Department lowered the
overall scores of three high schools. At Jamaica High School in Queens, the
department discovered that the school had improperly granted credit to some
transfer students. At John F. Kennedy High School in the Bronx and W. H.
Maxwell Career and Technical Education High School in Brooklyn,
administrators could not provide documentation to explain why some students
had left the schools.
Since 2008, at least four principals and assistant
principals have been reprimanded — two retired, one served a 30-day unpaid
suspension and another paid a $6,500 fine — on charges that included
tampering with tests.
Principals can get as much as $25,000 in bonuses if
their schools meet or exceed performance targets, and some experts are
skeptical that the department’s system of checks and balances is as
trustworthy as it should be, particularly when money is at stake.
Tapco’s administrators got a bonus once, for the
2008-9 school year, when the high school’s overall score was 85.8, which
earned it an A. (The middle school scored 73.) Ms. Passarella received
$7,000, while her assistant principals got $3,500 each, according to the
Education Department. (Administrator bonuses for 2009-10 performance have
not been doled out.)
“There’s an inherent temptation towards corruption
when you create a situation where there are rewards for things like higher
test scores or favorable surveys,” said Sol Stern, an education researcher
at the Manhattan Institute, a conservative research group. “It’s an
invitation to cheating.”
One mother, Cathy Joyner, whose daughter, Sapphire
Connor, is a junior, said the school was excellent, adding that “the
children are respectful” and that the school was “concentrating on their
talents.”
But one teacher, who spoke on condition of
anonymity because he said he feared for his job, gave a different account.
For teachers who do not do what the principal wants, the teacher said, “it’s
difficult to get tenure.”
“If a student doesn’t come to school,” he
continued, “how can you justify passing that kid?"
Wow: 97% of Elementary NYC Public Students Get A or B Grades --- There
must be higher IQ in the water!
"City Schools May Get Fewer A’s," by Jennifer Medina, The New York Times,
January 28, 2010 ---
http://www.nytimes.com/2010/01/30/education/30grades.html?hpw
Michael Mulgrew, the president of the United
Federation of Teachers, criticized the decision to reduce the number of
schools that receive top grades.
Continued in article
Issues in Group
Grading
December 6, 2004 message from Glen Gray
[glen.gray@CSUN.EDU]
When I have students
do group projects, I require each team member complete a peer review form
where the team member evaluates the other team members on 8 attributes using a
scale from 0 to 4. On this form they also give their team members an overall
grade. In a footnote it is explained that an “A” means the team member
receives the full team grade; a “B” means a 10% reduction from the team
grade; a “C” means 20% discount; a “D” means 30% discount; “E”
means 40%, and an “F” means a 100% discount (in other words, the team
member should get a zero).
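Glen's scale maps each peer-assigned letter to a fraction of the team grade that the member keeps. A minimal sketch of that mapping (the function name and the use of the median to combine several peer letters are illustrative assumptions; the original message does not say how multiple evaluations are aggregated):

```python
# Share of the team grade retained for each peer-assigned letter, per the
# scale described above: A = full grade, B = 10% off, C = 20%, D = 30%,
# E = 40%, F = 100% off (a zero).
DISCOUNT = {"A": 1.00, "B": 0.90, "C": 0.80, "D": 0.70, "E": 0.60, "F": 0.00}

def individual_score(team_grade: float, peer_letters: list[str]) -> float:
    """Apply the median peer letter's multiplier to the team grade.

    Using the median (an assumption, not part of Glen's description)
    blunts the effect of a single outlier evaluation.
    """
    multipliers = sorted(DISCOUNT[g] for g in peer_letters)
    median = multipliers[len(multipliers) // 2]
    return team_grade * median

# A member rated B, B, C on a team that earned 88 points: the sorted
# multipliers are [0.80, 0.90, 0.90], so the median multiplier is 0.90.
print(round(individual_score(88, ["B", "B", "C"]), 1))  # prints 79.2
```

Whatever the aggregation rule, publishing it up front (as most replies below recommend) matters more than the particular formula, since students facing a 100% discount will want to know exactly how it was computed.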
I assumed that the
form added a little peer pressure to the team work process. In the past,
students were usually pretty kind to each other. But now I have a situation
where the team members on one team have all given either E’s or F’s to one
of their team members. Their written comments about this guy are all pretty
consistent.
Now, I worry that if I
actually enforce the discount scale, things are going to get messy and the
s*** is going to hit the fan. I’m going to have one very upset student. He
is going to be mad at his fellow teammates.
Has anyone had a
similar experience? What has the outcome been? Is there a confidentiality issue
here? In other words, are the other teammates also going to be upset that I
revealed their evaluations? Is there going to be a lawsuit coming over the
horizon?
Glen L. Gray, PhD,
CPA
Dept. of Accounting & Information Systems
College of Business & Economics
California State University, Northridge
Northridge, CA 91330-8372
http://www.csun.edu/~vcact00f
Most of the replies to the message
above encouraged being clear at the beginning that team evaluations would affect
the final grade and then sticking to that policy.
December 5, 2004 reply from David Fordham, James Madison University
[fordhadr@JMU.EDU]
Glen, the fact that
you are in California, by itself, makes it much more difficult to predict the
lawsuit question. I've seen some lawsuits (and even worse, legal outcomes)
from California that are completely unbelievable... Massachusetts too.
But that said, I can
share my experience that I have indeed given zero points on a group grade to
students where the peer evaluations indicated unsatisfactory performance. My
justification to the students in these "zero" cases has always been,
"it was clear from your peers that you were not part of the group effort,
and thus have not earned the points for the group assignment".
I never divulge any
specific comments, but I do tell the student that I am willing to share the
comments with an impartial arbiter if they wish to have a third party confirm
my evidence. To date, no student has ever contested the decision.
Every other semester
or so, I have to deduct points to some degree for unsatisfactory work as
judged by peers. So far, I've had no problems making it stick, and in most
cases, the affected student willingly admits their deficiency, although
usually with excuses and rationales.
But I'm not in
California, and the legal precedents here are unlike those in your neck of the
woods.
If I were on the west
coast, however, I'd probably be likely to at least try to stick to my
principles as far as my university legal counsel would allow. Then, if my
counsel didn't support me, I'd look for employment in a part of the country
with a more reasonable legal environment (although that is getting harder to
find every day).
Good luck,
David Fordham
December 5, 2004 reply from Amy Dunbar
Sometimes groups do
blow up. Last summer I had one group ask me to remove a member. Another group
had a nonfunctioning member, based on the participation scores. I formed an
additional group comprised of just those two. They finally learned how to
work. Needless to say they weren’t happy with me, but the good thing about
teaching is that every semester we get a fresh start!
Another issue came up
for the first time, at least that I noticed. I learned that one group made a
pact to rate each other high all semester long regardless of work level, and I
still am not sure how I am going to avoid that problem next time around. The
agreement came to light when one of the students was upset that he did so
poorly on my exams. He told his senior that he had no incentive to do the
homework because he could just get the answers from the other group members,
and he didn’t have to worry about being graded down because of the
agreement. The student was complaining that the incentive structure I set up
hurt him because he needed more of a push to do the homework. The senior told me after
the class ended. Any suggestions?
TEXAS IS GOING TO THE
ROSE BOWL!!!!!!!!! Go Horns! Oops, that just slipped out.
Amy Dunbar
A Texas alum married to a Texas fanatic
December 6, 2004 reply from Tracey Sutherland
[tracey@AAAHQ.ORG]
Glen, My first
thought on reading your post was that if things get complicated it could be
useful to have a context for your grading policy that clearly establishes that
it falls within common practice (in accounting and in cooperative college
classrooms in general). Now you've already built some context from within
accounting by gathering some responses here from a number of colleagues for
whom this is a regular practice. Neal's approach can be a useful counterpart
to peer evaluation for triangulation purposes -- sometimes students will
report that they weren't really on-point for one reason or another (I've done
this with good result but only with upper-level grad students). If the issue
becomes more complicated because the student challenges your approach up the
administrative ladder, you could provide additional context for the
consistency of your approach in general by referencing the considerable body
of literature on these issues in the higher education research literature --
you are using a well-established approach that's been frequently tested. A
great resource if you need it is Barbara Millis and Phil Cottell's book
"Cooperative Learning for Higher Education Faculty" published by
Oryx Press (American Council on Education Series on Higher Education). They do
a great job of annotating the major work in the area in a short, accessible,
and concise book that also includes established criteria used for evaluating
group work and some sample forms for peer assessment and self-assessment for
group members (also just a great general resource for well-tested
cooperative/group activities -- and tips for how to manage implementing them).
Phil Cottell is an accounting professor (Miami U.) and would be a great source
of information should you need it.
Your established
grading policy indicates that there would be a reduction of grade when team
members give poor peer evaluations -- which wouldn't necessarily mean that you
would reveal individual's evaluations but that a negative aggregate evaluation
would have an effect -- and that would protect confidentiality consistently
with your policy. It seems an even clearer case because all group members have
given consistently negative evaluations -- as long as it's not some weird
interpersonal thing -- something that sounds like that would be a red flag for
the legal department. I hate it that we so often worry about legal
ramifications . . . but then again it pays to be prepared!
Peace of the
season,
Tracey
December 6, 2004 reply from Bob Jensen
I once listened to an award winning
AIS professor from a very major university (that after last night won't be
going to the Orange Bowl this year) say that the best policy is to promise
everybody an A in the course. My question then is what the point of the
confidential evaluations would be other than to make the professor feel bad at
the end of the course?
Bob Jensen
Too Good to Grade: How can these
students get into doctoral programs and law school if their prestigious
universities will not disclose grades and class rankings? Why grade at all
in this case?
Students at some top-ranked B-schools have a secret. It's something they
can't share even if it means losing a job offer. It's one some have worked hard
for and should be proud of, but instead they keep it to themselves. The secret
is their grades.
At four of the nation's 10 most elite B-schools --
including Harvard, Stanford, and Chicago -- students have adopted policies that
prohibit them or their schools from disclosing grades to recruiters. The idea is
to reduce competitiveness and eliminate the risk associated with taking
difficult courses. But critics say the only thing nondisclosure reduces is one
of the most important lessons B-schools should teach: accountability (see
BusinessWeek, 9/12/05,
"Join the Real World, MBAs").
It's a debate that's flaring up on B-school campuses
across the country. (For more on this topic, log on to our
B-Schools Forum.) And nowhere is it more
intense than at University of Pennsylvania's Wharton School, where students,
faculty, and administrators have locked horns over a school-initiated proposal
that would effectively end a decade of grade secrecy at BusinessWeek's No.
3-ranked B-school. It wouldn't undo disclosure rules but would recognize the top
25% of each class -- in effect outing everyone else. It was motivated, says
Vice-Dean Anjani Jain in a recent Wharton Journal article, by the "disincentivizing
effects" of grade nondisclosure, which he says faculty blame for lackluster
academic performance and student disengagement.
"Campus Confidential:
Four top-tier B-schools don't disclose grades. Now that policy is under attack,"
Business Week, September 12, 2005 ---
http://snipurl.com/BWSept122
Jensen Comment: Talk about moral hazard. What if 90% of the
applicants claim to be straight A graduates at the very top of the class,
and nobody can prove otherwise?
September 2, 2005 message from Denny Beresford
[DBeresford@TERRY.UGA.EDU]
Bob,
The impression I have (perhaps I'm misinformed) is that most MBA classes
result in nearly all A's and B's for students. If that's the case, I wonder
how much a grade point average really matters.
Denny Beresford
September 2, 2005 reply from Bob Jensen
One of the schools, Stanford,
in the 1970s lived with the Van Horn rule that dictated no more than 15% A
grades in any MBA class. I guess grade inflation has hit the top
business schools. Then again, maybe the students are just better than
we were.
I added the following to my
Tidbit on this:
Talk about moral hazard. What
if 90% of the applicants claim to be straight A graduates at the very top
of the class, and nobody can prove otherwise?
After your message Denny, I
see that perhaps it's not moral hazard. Maybe 90% of the students actually
get A grades in these business schools, in which case nearly 90% would
graduate summa cum laude.
What a joke! It must be
nice teaching students who never hammer you on teaching evaluations because
you gave them a C or below.
The crucial quotation is
"faculty blame for lackluster academic performance and student
disengagement." Isn't this a laugh if they all get A and B grades despite
"lackluster academic performance and student disengagement"?
I think these top schools are
simply catering to their customers!
Bob Jensen
Harvard Business School Eliminates Ban on a Graduate's
Discretionary Disclosure of Grades
The era of the second-year slump at
Harvard Business School is over. Or maybe the days of
student cooperation are over. Despite strong student
opposition, the business school announced Wednesday that it
was ending its ban on sharing grades with potential
employers. Starting with new students who enroll in the
fall, M.B.A. candidates can decide for themselves whether to
share their transcripts. The ban on grade-sharing has been
enormously popular with students since it was adopted in
1998. Supporters say that it discouraged (or at least kept
to a reasonable level) the kind of cut-throat competition
for which business schools are known. With the ban, students
said they were more comfortable helping one another or
taking difficult courses. But a memo sent to students by Jay
O. Light, the acting dean, said that the policy was wrong.
“Fundamentally, I believe it is inappropriate for HBS to
dictate to students what they can and cannot say about their
grades during the recruiting process. I believe you and your
classmates earn your grades and should be accountable for
them, as you will be accountable for your performance in the
organizations you will lead in the future,” he wrote.
Scott Jaschik, "Survival of the Fittest MBA," Inside
Higher Ed, December 16, 2005 ---
http://www.insidehighered.com/news/2005/12/16/grades
Bob Jensen's threads on Controversies in Higher Education are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
Software for faculty and departmental performance evaluation and
management
May 30, 2006 message from Ed Scribner
[escribne@NMSU.EDU]
A couple of months ago I asked for any experiences
with systems that collect faculty activity and productivity data for
multiple reporting needs (AACSB, local performance evaluation, etc.). I said
I'd get back to the list with a summary of private responses.
No one reported any significant direct experience,
but many AECMers provided names and e-mail addresses of [primarily]
associate deans who had researched products from Sedona and Digital
Measures. Since my associate dean was leading the charge, I just passed
those addresses on to her.
We ended up selecting Digital Measures mainly
because of our local faculty input, the gist of which was that it had a more
professional "feel." My recollection is that the risk of data loss with
either system is acceptable and that the university "owns" the data. I
understand that a grad student is entering our data from the past five years
to get us started.
Ed Scribner
New Mexico State University
Las Cruces, NM, USA
Jensen Comment
The Digital Measures homepage is at
http://www.digitalmeasures.com/
Over 100 universities use Digital Measures'
customized solutions to connect administrators, faculty, staff, students,
and alumni. Take a look at a few of the schools and learn more about Digital
Measures.
Free from the Huron Consulting Group (Registration Required) ---
http://www.huronconsultinggroup.com/
Effort Reporting Technology for Higher Education ---
http://www.huronconsultinggroup.com/uploadedFiles/ECRT_email.pdf
Question Mark (Software for Test and
Tutorial Generation and Networking)
- Barron's Home Page
- Metasys Japan Software
- Question Mark America home page
- Using ExamProc for
OMR Exam Marking
- Vizija d.o.o. -
Educational Programs - Wisdom Tools
Yahoo Links
TechKnowLogia --- http://www.techknowlogia.org/
TechKnowLogia
is an international online journal that provides policy makers,
strategists, practitioners and technologists at the local, national and
global levels with a strategic forum to:
Explore the vital
role of different information technologies (print, audio, visual
and digital) in the development of human and knowledge capital;
Share policies,
strategies, experiences and tools in harnessing technologies for
knowledge dissemination, effective learning, and efficient
education services;
Review the latest
systems and products of technologies of today, and peek into the
world of tomorrow; and
Exchange information
about resources, knowledge networks and centers of expertise.
- Do
Technologies Enhance Learning?
- Brain
Research, Learning and Technology
- Technologies
at Work for: Critical Thinking, Science Instruction,
Teaching Practices, etc...
- Interactive
TV as an Educational Tool
- Complexity
of Integrating ICTs into Curriculum & Exams
- Use of
Digital Cameras to Enhance Learning
- Creating
Affordable Universal Internet Access
Bob Jensen's threads on education technologies are at http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
"What's the Best Q&A Site?" by Wade Roush, MIT's
Technology Review, December 22, 2006 ---
http://www.technologyreview.com/InfoTech/17932/
Magellan Metasearch ---
http://sourceforge.net/projects/magellan2/
Many educators would like to put more materials on
the web, but they are concerned about protecting access to all or parts of
documents. For example, a professor may want to share a case with the world but
limit the accompanying case solution to selected users. Or a professor may want to
make certain lecture notes available but limit the access of certain copyrighted portions
to students in a particular course. If protecting parts of your documents is of
great interest, you may want to consider NetCloak from Maxum at http://www.maxum.com/ . You can download a free
trial version.
NetCloak Professional Edition
combines the power of Maxum's classic combo, NetCloak and NetForms, into a single CGI
application or WebSTAR API plug-in. With NetCloak Pro, you can use HTML forms on your web
site to create or update your web pages on the fly. Or you can store form data in text
files for importing into spreadsheets or databases off-line. Using NetCloak Pro, you can
easily create online discussion forums, classified ads, chat systems, self-maintaining
home pages, frequently-asked-question lists, or online order forms!
NetCloak Pro also gives your web
site access to e-mail. Users can send e-mail messages via HTML forms, and NetCloak Pro can
create or update web pages whenever an e-mail message is received by any e-mail address.
Imagine providing HTML archives of your favorite mailing lists in minutes!
NetCloak Pro allows users to
"cloak" pages individually or "cloak" individual paragraphs or text
strings. The level of security seems to be much higher than that of scripted
passwords in JavaScript or VBScript.
Eric Press led me to http://www.maxum.com/NetCloak/FAQ/FAQList.html
(Thank you Eric, and thanks for the "two lunches")
Richard Campbell responded as follows:
Alternatives to using Netcloak: 1.
Symantec http://www.symantec.com has a free
utility called Secret which will password-protect any type of file.
2. Winzip http://www.winzip.com has another shareware
utility called WinZip Self-Extractor, which has a password-protect capability. The
advantage of this approach is that you can bundle different file types (.doc, .xls), zip
them, and have them automatically install to a folder that you have named. If you
have a shareware install utility that creates a setup.exe routine, you can also have it
install automatically on the student's machine. The price of this product is about $30.
Full Disclosure to Consumers of Higher Education (including assessment
of colleges and the Spellings Commission Report) ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#FullDisclosure
Dropping a Bomb on Accreditation
The most provocative vision for changing accreditation
put forward at Tuesday’s meeting came from Robert C. Dickeson, president
emeritus of the University of Northern Colorado. Dickeson’s presentation was
loaded with irony, in some ways; a position paper he wrote in 2006 as a
consultant to Margaret Spellings’s Commission on the Future of Higher Education
was harshly critical of the current system of accreditation (calling it rife
with conflicts of interest and decidedly lacking in transparency) and suggested
replacing the regional accrediting agencies with a “national accreditation
foundation” that would establish national standards for colleges to meet.
Dickeson’s presentation Tuesday acknowledged that there remained legitimate
criticisms of accreditation’s rigor and agility, noting that many colleges and
accrediting agencies still lacked good information about student learning
outcomes “40 years after the assessment movement began in higher education.”
Doug Lederman, "Whither Accreditation," Inside Higher Ed, January 28,
2009 ---
http://www.insidehighered.com/news/2009/01/28/accredit
Dickeson's 2006 Position Paper "Dropping a Bomb on Accreditation" ---
http://insidehighered.com/news/2006/03/31/accredit
Here’s something
that may be useful when assessing a doctoral program. Note the key items listed
near the end of the document.
From the
Chronicle of Higher Education, November 7, 2008 ---
http://chronicle.com/weekly/v55/i11/11a00104.htm?utm_source=wb&utm_medium=en
Provosts around the country are anticipating — and
some are surely dreading — the long afternoons when they will go over
national rankings data for their graduate departments. No later than this
winter, after many delays, the National Research Council plans to release
its assessments of American doctoral programs.
Student-faculty ratios, time to degree, stipends,
faculty research productivity, and citation counts: Those numbers and many
others will be under national scrutiny.
But one university couldn't wait. Last year,
prodded by anxious faculty members worried about low Ph.D. production, Ohio
State University conducted a thorough review of its doctoral programs,
drawing heavily on data that its departments had compiled for the council's
questionnaire. The Ohio State experience provides a window on what may be
coming nationally.
The evaluations had teeth. Of the 90 doctoral
programs at Ohio State, five small ones were tagged as "candidates for
disinvestment or elimination": comprehensive vocational education (a
specialty track in the college of agriculture), soil science, welding
engineering, rehabilitation services, and technology education. Another 29
programs were instructed to reassess or restructure themselves.
Some programs got good news, however. Twenty-nine
that were identified as "high quality" or "strong" will share hundreds of
thousands of dollars in new student-fellowship subsidies.
Many faculty members say the assessments provided a
long-overdue chance for Ohio State to think strategically, identifying some
fields to focus on and others that are marginal. But the process has also
had its share of bumps. The central administration concluded that certain
colleges, notably the College of Biological Sciences, were too gentle in
their self-reports. And some people have complained that the assessments
relied too heavily on "input" variables, such as students' GRE scores.
Despite those concerns, the dean of Ohio State's
Graduate School, Patrick S. Osmer, says the assessment project has exceeded
his expectations. He hopes it can serve as a model for what other
institutions can do with their doctoral data. "The joy of working here," he
says, "is that we're trying to take a coordinated, logical approach to all
of these questions, to strengthen the university."
A Faculty Mandate
The seeds of the assessment project were planted in
2005, when a high-profile faculty committee issued a report warning that
Ohio State was generating proportionally fewer Ph.D.'s than were the other
Big Ten universities. "The stark fact is that 482 Ph.D. degrees ... granted
in 2003-4 is far below the number expected from an institution the size and
(self-declared) quality of OSU," the report read. (The 482 figure excluded
doctorates awarded by Ohio State's college of education.) At the University
of Wisconsin at Madison, for example, each tenure-track faculty member
generated an average of 0.4 Ph.D.'s each year. At Ohio State, the figure was
only 0.267.
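The comparison above boils down to a simple per-faculty production rate, which can be sketched as follows. The figures (0.4 for Wisconsin, 0.267 for Ohio State) come from the article; the function itself is an illustrative assumption, not the committee's actual formula.

```python
# Per-faculty Ph.D. production rate, as cited in the Ohio State report.

def phds_per_faculty(phds_granted: float, tenure_track_faculty: float) -> float:
    """Average Ph.D.'s produced per tenure-track faculty member per year."""
    return phds_granted / tenure_track_faculty

# A hypothetical department granting 4 Ph.D.'s a year with 15 tenure-track
# faculty sits at roughly Ohio State's cited rate:
print(round(phds_per_faculty(4, 15), 3))  # 0.267
```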
The committee recommended several steps: Give the
central administration more power in graduate-level admissions. Organize
stipends, fellowships, and course work in ways that encourage students to
complete their doctorates in a timely manner. Stop giving doctoral-student
subsidies to students who are likely to earn only master's degrees. And
distribute subsidies from the central administration on a strategic basis,
rewarding the strongest programs and those with the most potential for
improvement.
"One thing that motivated all of this," says Paul
Allen Beck, a professor of political science and a former dean of social and
behavioral sciences at Ohio State, "was a feeling that the university had
not invested enough in Ph.D. education. Our universitywide fellowships were
not at a competitive level. We really felt that we should try to do a better
job of concentrating our university investments on the very best programs."
Ohio State officials had hoped to use the National
Research Council's final report itself for their evaluations. But after its
release was postponed for what seemed like the sixth or seventh time, they
moved forward without it.
In September 2007, Mr. Osmer asked the deans of
Ohio State's 18 colleges to report data about their doctoral students'
median time to degree, GRE scores, stipends, fellowships, job-placement
outcomes, and racial and ethnic diversity.
Many of those numbers were easy to put together,
because departments had compiled them during the previous year in response
to the council's questionnaire. But job placements — a topic that will not
be covered in the NRC report — were something that certain Ohio State
programs had not previously tracked.
"This was a huge new project for us and for some of
our departments as well," says Julie Carpenter-Hubin, director of
institutional research and planning. "But simply going around and talking to
faculty took care of most of it. It's really remarkable the degree to which
faculty members stay in touch with their former doctoral students and know
where they are. I think we wound up with placement data for close to 80
percent of our Ph.D. graduates, going 10 years back."
Defending Their Numbers
The reports that Ohio State's colleges generated
last fall contained a mixture of quantitative data — most prominently GRE
scores and time-to-degree numbers — and narrative arguments about their
departments' strengths. The College of Social and Behavioral Sciences, for
example, noted that several recent Ph.D.s in economics, political science,
and psychology had won tenure-track positions at Ivy League institutions.
When they had to report poor-looking numbers,
departments were quick to cite reasons and contexts. The anthropology
program said its median time to degree of 7.3 years might seem high when
compared with those of other degree courses, but is actually lower than the
national average for anthropology students, who typically spend years doing
fieldwork. Economics said its retention-and-completion rate, which is less
than 50 percent, might look low but is comparable to those in other highly
ranked economics departments, where students are often weeded out by
comprehensive exams at the end of the first year.
In April 2008, a committee appointed and led by Mr.
Osmer, the graduate-school dean, digested the colleges' reports and issued a
report card, ranking the 90 doctoral programs in six categories. (See table
on following page.)
The panel did not meekly accept the colleges'
self-evaluations. The College of Biological Sciences, for example, had
reported that it lacked enough data to draw distinctions among its programs.
But the committee's report argued, among other things, that the small
program in entomology appeared to draw relatively little outside research
support, and that its students had lower GRE scores than those in other
biology programs. (Entomology and all other doctoral programs in biology
were among the 29 programs that Mr. Osmer's committee deemed in need of
reassessment or restructuring.)
The report's points about entomology — and about
the general organization of the college — were controversial among the
faculty members, says Matthew S. Platz, a professor of chemistry who became
interim dean of biological sciences in July. But faculty members have taken
the lead in developing new designs for the college, he says, to answer many
of the central administration's concerns.
"I'm delighted by the fact that at the grass-roots
level, faculty members have been talking about several types of
reorganization," Mr. Platz says. "And I'm hopeful that two or three of them
will be approved by the end of the year."
'Unacceptably Low Quality'
The five doctoral degrees named as candidates for
the ax have also stirred controversy.
Jerry M. Bigham, a professor of soil science and
director of Ohio State's School of Environment and Natural Resources, says
he was disappointed but not entirely surprised by the committee's suggestion
that his program could be terminated. The soil-science program has existed
on its own only since 1996; before that it was one of several
specializations offered by the doctoral program in agronomy.
"In essence, we've had students and faculty members
spread across three programs," he says. So he understands why the university
might want to place soil sciences under a larger umbrella, in order to
reduce overhead and streamline the administration.
At the same time, he says, several people were
offended by the Osmer committee's blunt statement that soil-science students
are of "unacceptably low quality."
The panel's analysis of the students' GRE scores
was "just a snapshot, and I think it really has to be viewed with caution,"
Mr. Bigham says. "Even though we're a small program, our students have won
university fellowships and have been recognized for their research. So I
would really object to any characterization of our students as being weak."
The final verdict on the five programs is
uncertain. The colleges that house them might propose folding them into
larger degree courses. Or they might propose killing them outright. All such
proposals, which are due this fall, are subject to approval by the central
administration.
Jason W. Marion, president of the university's
Council of Graduate Students, says its members have generally supported the
doctoral-assessment project, especially its emphasis on improving stipends
and fellowships. But some students, he adds, have expressed concern about an
overreliance on GRE scores at the expense of harder-to-quantify "output"
variables like job-placement outcomes.
Mr. Osmer replies that job placement actually has
been given a great deal of weight. "Placing that alongside the other
variables really helped our understanding of these programs come together,"
he says.
At this summer's national workshop sessions of the
Council of Graduate Schools, Mr. Osmer was invited to lecture about Ohio
State's assessment project and to discuss how other institutions might make
use of their own National Research Council data. William R. Wiener, a vice
provost at Marquette University who also spoke on Mr. Osmer's panel, calls
the Ohio State project one example of how universities are becoming smarter
about assessments.
"Assessments need to have reasonable consequences,"
Mr. Wiener says. "I think more universities realize that they need to create
a culture of assessment, and that improving student learning needs to
permeate everything that we do."
Mr. Beck, the former social-sciences dean at Ohio
State, says that even for relatively strong departments — his own
political-science department was rated "high quality" by Mr. Osmer's
committee — a well-designed assessment process can be eye-opening.
"These programs just kind of float along, guided by
their own internal pressures," says Mr. Beck. But "the departments here were
forced to take a hard look at themselves, and they sometimes saw things that
they didn't like."
HOW OHIO STATE U. RATES DOCTORAL PROGRAMS
Until recently, Ohio State University used a simple, quantity-based
formula to distribute student-support money to its doctoral programs.
In essence, the more credit hours taken by students in a program each
quarter, the more money the program collected. But last year the
university introduced quality-control measures. It used them to make
choices about which programs to invest in — and, more controversially,
which ones to eliminate.
Measures used:
- Students' time to degree
- Students' GRE scores
- Graduates' job placements, 1996-2005
- Student diversity
- The program's share of Ph.D. production (both nationally and among
Ohio State's peers)
- "Overall program quality and centrality to the university's mission"
Resulting ratings:
- High quality: 12 programs
- Strong: 17 programs
- Good: 16 programs
- New and/or in transition; cannot be fully
assessed: 11 programs
- Must reassess and/or restructure: 29
programs
- Candidates for disinvestment or
elimination: 5 programs
What the ratings mean:
- Programs rated "high quality" and "strong"
will share new funds from the central administration for
graduate-student stipends.
- "Good" programs have been asked to make
improvements in specific areas. Their support will not
significantly change.
- Colleges with doctoral programs that were
deemed in need of reassessment or restructuring were asked to
submit new strategic plans this fall. Those plans are subject to
approval by Ohio State's provost.
- The new strategic plans will also deal
with programs deemed candidates for disinvestment or
elimination. Those programs might be folded into larger degree
courses, or killed outright.
Bob
Jensen's threads on assessment are at
http://faculty.trinity.edu/rjensen/assess.htm
"Minnesota Colleges Seek Accountability by
the Dashboard Light," by Paul Basken, Chronicle of Higher Education,
June 18, 2008 ---
http://chronicle.com/daily/2008/06/3423n.htm
When your car starts sputtering, it's easy to look
at the dashboard and see if you're running out of gas. What if you could do
the same with your local college?
Minnesota's system of state colleges and
universities believes it can show the way.
After two years of preparation, the 32-college
system unveiled on Tuesday its new Accountability Dashboard. The service is
based on a Web site that displays a series of measures—tuition rates,
graduates' employment rates, condition of facilities—that use
speedometer-type gauges to show exactly how the Minnesota system and each of
its individual colleges is performing.
The idea is in response to the growing demand,
among both policy makers and the public, for colleges to provide more useful
and accessible data about how well they are doing their jobs.
"There's a great call across the country for
accountability and transparency, and I don't think it's going to go away,"
said James H. McCormick, chancellor of the 374,000-student system. "It's
just a new way of doing business."
Shining a Light
The information in the new format was already
publicly available. But its presentation in the dashboard format, along with
comparisons with statewide and national figures as well as the system's own
goals, will put pressure on administrators and faculty members for
improvement, Mr. McCormick and other state education officials told
reporters.
"The dashboard shines a light on where we need to
improve," said Ruth Grendahl, vice chairman of the Board of Trustees of the
Minnesota State Colleges and Universities.
Among the areas the dashboard already indicates as
needing improvement is the cost of attending Minnesota's state colleges. The
gauges for tuition and fees at all 30 of the system's two-year institutions
show needles pointing to "needs attention," a reflection of the fact that
their costs are higher than those of 80 percent of their peers nationwide.
The dashboard shows the system faring better in
other areas, such as licensure-examination pass rates and degree-completion
rates, in which the average figures are in the "meets expectations" range.
Other measures, like "innovation" and "student engagement," don't yet show
results, as the necessary data are still being collected or the criteria
have not yet been defined.
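The gauge logic described above can be sketched as a simple threshold mapping. This is a hypothetical illustration: the 80-percent figure echoes the article's tuition example, but the label boundaries are assumptions, not the Minnesota system's published criteria.

```python
# Hypothetical mapping from a cost metric's peer percentile to a
# dashboard status label (higher percentile = costlier than more peers,
# so worse). Thresholds are illustrative assumptions.

def gauge_status(peer_cost_percentile: float) -> str:
    """Classify a cost-type metric where a higher percentile is worse."""
    if peer_cost_percentile >= 80.0:   # costlier than 80% of peers
        return "needs attention"
    if peer_cost_percentile >= 50.0:
        return "approaching expectations"
    return "meets expectations"

# The article's example: two-year colleges whose tuition and fees exceed
# those of 80 percent of peers show needles pointing to "needs attention".
print(gauge_status(81.0))  # needs attention
```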
Tool of Accountability
Many private companies already use dashboard-type
displays in their computer systems to help monitor business performance, but
the data typically serve an internal function rather than being a tool for
public accountability.
The Minnesota dashboard stems in part from the
system's work through the National Association of System Heads, or NASH, on
a project to improve the education of minority and low-income students. The
project is known as Access to Success.
Continued in article
Jensen Comment
Those in my generation might appreciate the fact that this car has a "NASH"
dashboard. The problem is that when a car's dashboard signals trouble such as
oil leaks and overheating, owners can easily trade in or junk a clunker
automobile. This is not so simple in the politics of state universities.
May 2, 2008 message from Carolyn Kotlas
[kotlas@email.unc.edu]
REPORT ON E-LEARNING RETURNS ON INVESTMENT
"Within the academic community there remains a
sizable proportion of sceptics who question the value of some of the tools
and approaches and perhaps an even greater proportion who are unaware of the
full range of technological enhancements in current use. Amongst senior
managers there is a concern that it is often difficult to quantify the
returns achieved on the investment in such technologies. . . . JISC infoNet,
the Association for Learning Technology (ALT) and The Higher Education
Academy were presented with the challenge of trying to make some kind of
sense of the diversity of current e-learning practice across the sector and
to seek out evidence that technology-enhanced learning is delivering
tangible benefits for learners, teachers and institutions."
The summary of the project is presented in the
recently-published report, "Exploring Tangible Benefits of e-Learning: Does
Investment Yield Interest?" Some benefits were hard to measure and quantify,
and the case studies were limited to only sixteen institutions. However,
according to the study, there appears to be "clear evidence" of many good
returns on investment in e-learning. These include improved student pass
rates, improved student retention, and benefits for learners with special
needs.
A copy of the report is available at
http://www.jiscinfonet.ac.uk/publications/camel-tangible-benefits.pdf
A two-page briefing paper is available at
http://www.jisc.ac.uk/media/documents/publications/bptangiblebenefitsv1.pdf
JISC infoNet, a service of the Joint Information
Systems Committee, "aims to be the UK's leading advisory service for
managers in the post-compulsory education sector promoting the effective
strategic planning, implementation and management of information and
learning technology." For more information, go to
http://www.jiscinfonet.ac.uk/
Association for Learning Technology (ALT), formed
in 1993, is "the leading UK body bringing together practitioners,
researchers, and policy makers in learning technology." For more
information, go to
http://www.alt.ac.uk/
The mission of The Higher Education Academy, owned
by two UK higher education organizations (Universities UK and GuildHE), is
to "help institutions, discipline groups, and all staff to provide the best
possible learning experience for their students." For more information, go
to
http://www.heacademy.ac.uk/
Bob Jensen's threads on asynchronous learning are at
http://faculty.trinity.edu/rjensen/255wp.htm
Also see
http://faculty.trinity.edu/rjensen/265wp.htm
Assessment Issues ---
http://faculty.trinity.edu/rjensen/assess.htm
Threads on Costs and Instructor Compensation (somewhat outdated) ---
http://faculty.trinity.edu/rjensen/distcost.htm
Bob Jensen's education technology threads are linked at
http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
Question
Guess which parents most strongly object to grade inflation?
Hint: Parents Say Schools Game System, Let Kids Graduate Without Skills
The Bredemeyers represent a new voice in special
education: parents disappointed not because their children are failing, but
because they're passing without learning. These families complain that schools
give their children an easy academic ride through regular-education classes,
undermining a new era of higher expectations for the 14% of U.S. students who
are in special education. Years ago, schools assumed that students with
disabilities would lag behind their non-disabled peers. They often were taught
in separate buildings and left out of standardized testing. But a combination of
two federal laws, adopted a quarter-century apart, has made it national policy
to hold almost all children with disabilities to the same academic standards as
other students.
John Hechinger and Daniel Golden, "Extra Help: When Special Education Goes
Too Easy on Students," The Wall Street Journal, August 21, 2007, Page A1
---
http://online.wsj.com/article/SB118763976794303235.html?mod=todays_us_page_one
Bob Jensen's threads on grade inflation are at
http://faculty.trinity.edu/rjensen/Assess.htm#GradeInflation
Bob Jensen's fraud updates are at
http://faculty.trinity.edu/rjensen/FraudUpdates.htm
Question
What Internet sites help you compare neighboring K-12 schools?
"Grading Neighborhood Schools: Web Sites Compare A Variety of Data, Looking
Beyond Scores," by Katherine Boehret, The Wall Street Journal, February
20, 2008; Page D6 ---
I performed various school queries
using
Education.com
Inc., GreatSchools Inc.'s
GreatSchools.net and
SchoolMatters.com by typing in a ZIP Code, city,
district or school name. Overall, GreatSchools and Education.com offered the
most content-packed environments, loading their sites with related articles
and offering community feedback on education-related issues by way of blog
posts or surveys. And though GreatSchools is 10 years older than
Education.com, which made its debut in June, the latter has a broader
variety of content and considers its SchoolFinder feature -- newly available
as of today -- just a small part of the site.
Both Education.com and
GreatSchools.net base a good portion of their data on information gathered
by the Department of Education and the National Center for Education
Statistics, the government entity that collects and analyzes data related to
education.
SchoolMatters.com, a service of
Standard & Poor's, is more bare-bones, containing quick statistical
comparisons of schools. (S&P is a unit of McGraw-Hill Cos.) This site gets
its content from various sources, including state departments of education,
private research firms, the Census and National Public Education Finance
Survey. This is evidenced by lists, charts and pie graphs that would make
Ross Perot proud. I learned about where my alma mater high school got its
district revenue in 2005: 83% was local, 15% was state and 2% was federal.
But I couldn't find district financial information for more recent years on
the site.
All three sites base at least some
school-evaluation results on test scores, a point that some of their users
critique. Parents and teachers alike point out that testing doesn't always
paint an accurate picture of a school and can be skewed by various
unacknowledged factors, such as the number of students with disabilities.
Education.com's SchoolFinder feature is starting
with roughly 47,000 schools in 10 states: California, Texas, New York,
Florida, Illinois, Pennsylvania, Ohio, Michigan, New Jersey and Georgia. In
about two months, the site hopes to have data for all states, totaling about
60,000 public and charter schools. I was granted early access to
SchoolFinder, but only Michigan was totally finished during my testing.
SchoolFinder lets you narrow your results by type
(public or charter), student-to-teacher ratio, school size or Adequate
Yearly Progress (AYP), a measurement used to determine each school's annual
progress. Search results showed specific details on teachers that I didn't
see on the other sites, such as how many teachers were fully credentialed in
a particular school and the average years of experience held by a school's
teachers.
The rest of the Education.com site contains over
4,000 articles written by well-known education sources like the New York
University Child Study Center, Reading is Fundamental and the Autism Society
of America. It also contains a Web magazine and a rather involved
discussion-board community where members can ask questions of like-minded
parents and the site's experts, who respond with advice and suggestions of
articles that might be helpful.
Private schools aren't required to release test
scores, student or teacher statistics, so none of the sites had as much data
on private schools. However, GreatSchools.net at least offered basic results
for most private-school queries that I performed, such as a search for
Salesianum School in Delaware (where a friend of mine attended) that
returned the school's address, a list of the Advanced Placement exams it
offered from 2006 to 2007 and six rave reviews from parents and former
students.
GreatSchools.net makes it easy to compare schools,
even without knowing specific names. After finding a school, I was able to
easily compare that school with others in the geographic area or school
district -- using a chart with numerous results on one screen. After
entering my email address, I saved schools to My School List for later
reference.
I couldn't find each school's AYP listed on
GreatSchools.net, though these data were on Education.com and
SchoolMatters.com.
SchoolMatters.com doesn't provide articles, online
magazines or community forums. Instead, it spits out data -- and lots of it.
A search for "Philadelphia" returned 324 schools in a neat comparison chart
that could, with one click, be sorted by grade level, reading test scores,
math test scores or students per teacher. (The Julia R. Masterman Secondary
School had the best reading and math test scores in Philadelphia, according
to the site.)
SchoolMatters.com didn't have nearly as much user
feedback as Education.com or GreatSchools.net. But stats like a school's
student demographics, household income distribution and the district's
population age distribution were accessible thanks to colorful pie charts.
These three sites provide a good overall idea of
what certain schools can offer, though GreatSchools.net seems to have the
richest content in its school comparison section. Education.com excels as a
general education site and will be a comfort to parents in search of
reliable advice. Its newly added SchoolFinder, while it's in early stages
now, will only improve this resource for parents and students.
May 2, 2007 message from Carnegie President
[carnegiepresident@carnegiefoundation.org]
A different way to think about ... accountability
Alex McCormick's timely essay brings to our attention one of the most
intriguing paradoxes associated with high-stakes measurement of educational
outcomes. The more importance we place on going public with the results of
an assessment, the higher the likelihood that the assessment itself will
become corrupted, undermined and ultimately of limited value. Some policy
scholars refer to the phenomenon as a variant of "Campbell's Law," named for
the late Donald Campbell, an esteemed social psychologist and methodologist.
Campbell stated his principle in 1976: "The more any quantitative social
indicator is used for social decisionmaking, the more subject it will be to
corruption pressures and the more apt it will be to distort and corrupt the
social processes it is intended to monitor."
In the specific case of the Spellings Commission
report, Alex points out that the Secretary's insistence that information be
made public on the qualities of higher education institutions will place
ever higher stakes on the underlying measurements, and that very visibility
will attenuate their effectiveness as accountability indices. How are we to
balance the public's right to know with an institution's need for the most
reliable and valid information? Alex McCormick's analysis offers us another
way to think about the issue.
Carnegie has created a forum—Carnegie
Conversations—where you can engage publicly with the author and read and
respond to what others have to say about this article at
http://www.carnegiefoundation.org/perspectives/april2007 .
Or you may respond to Alex privately through
carnegiepresident@carnegiefoundation.org .
We look forward to hearing from you.
Sincerely,
Lee S. Shulman
President The Carnegie Foundation for the Advancement of Teaching
Jensen Comment
The fact that an assessment provides incentives to cheat is not a reason not to
assess. The fact that we assign grades to students gives them incentives to
cheat. That does not justify ceasing to assess, because the assessment process
is in many instances the major incentive for a student to work harder and learn
more. The fact that business firms have to be audited and produce financial
statements provides incentives to cheat. That does not justify not holding
business firms accountable. Alex McCormick's analysis and Shulman's concurrence
are a bit one-sided in opposing the Spellings Commission recommendations.
Also see Full Disclosure to Consumers of Higher Education at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#FullDisclosure
School Assessment and College Admission Testing
July 25, 2006 query from Carol Flowers
[cflowers@OCC.CCCD.EDU]
I am looking for a study that I saw. I was unsure
if someone in this group had supplied the link, originally. It was a very
honest and extremely comprehensive evaluation of higher education. In it,
the
Higher Education Evaluation and Research Group was
constantly quoted. But, what organizations it is affiliated with, I am
unsure.
They commented on the lack of student academic
preparedness in our educational system today along with other challenging
areas that need to be addressed in order to serve the population with which
we now deal.
If anyone remembers such a report, please forward
to me the url.
Thank You!
July 25, 2006 reply from Bob Jensen
Hi Carol,
I think the HEERG is
affiliated with the Chancellor's Office of the California Community
Colleges. It is primarily focused upon accountability and assessment of
these colleges.
HEERG ---
http://snipurl.com/HEERG
Articles related to your query include the
following:
Leopards in the Temple ---
http://www.insidehighered.com/views/2006/06/12/caesar
Accountability, Improvement and Money ---
http://www.insidehighered.com/views/2005/05/03/lombardi
Grade Inflation and Abdication ---
http://www.insidehighered.com/views/2005/06/03/lombardi
Students Read Less. Should We Care? ---
http://www.insidehighered.com/views/2005/08/23/lombardi
Missing the Mark: Graduation Rates and University
Performance ---
http://www.insidehighered.com/views/2005/02/14/lombardi2
Assessment of Learning Achievements of College Graduates
"Getting the Faculty On Board," by Freeman A. Hrabowski III, Inside Higher
Ed, June 23, 2006 ---
http://www.insidehighered.com/views/2006/06/23/hrabowski
But as assessment becomes a national imperative,
college and university leaders face a major challenge: Many of our faculty
colleagues are skeptical about the value of external mandates to measure
teaching and learning, especially when those outside the academy propose to
define the measures. Many faculty members do not accept the need for
accountability, but the assessment movement’s success will depend upon
faculty because they are responsible for curriculum, instruction and
research. All of us — policy makers, administrators and faculty — must work
together to develop language, strategies and practices that help us
appreciate one another and understand the compelling need for assessment —
and why it is in the best interest of faculty and students.
Why is assessment important? We know from the work
of researchers like Richard Hersh, Roger Benjamin, Mark Chun and George Kuh
that college enrollment will be increasing by more than 15 percent
nationally over the next 15 years (and in some states by as much as 50
percent). We also know that student retention rates are low, especially
among students of color and low-income students. Moreover, of every 10
children who start 9th grade, only seven finish high school, five start
college, and fewer than three complete postsecondary degrees. And there is a
20 percent gap in graduation rates between African Americans (42 percent)
and whites (62 percent). These numbers are of particular concern given the
rising higher education costs, the nation’s shifting demographics, and the
need to educate more citizens from all groups.
At present, we do not collect data on student
learning in a systematic fashion and rankings on colleges and universities
focus on input measures, rather than on student learning in the college
setting. Many people who have thought about this issue agree: We need to
focus on “value added” assessment as an approach to determine the extent to
which a university education helps students develop knowledge and skills.
This approach entails comparing what students know at the beginning of their
education and what they know upon graduating. Such assessment is especially
useful when large numbers of students are not doing well — it can and should
send a signal to faculty about the need to look carefully at the “big
picture” involving coursework, teaching, and the level of support provided
to students and faculty.
Many in the academy, however, continue to resist
systematic and mandated assessment in large part because of problems they
see with K-12 initiatives like No Child Left Behind — e.g., testing that
focuses only on what can be conveniently measured, unacceptable coaching by
teachers, and limiting what is taught to what is tested. Many academics
believe that what is most valuable in the college experience cannot be
measured during the college years because some of the most important effects
of a college education only become clearer some time after graduation.
Nevertheless, more institutions are beginning to understand that value-added
assessment can be useful in strengthening teaching and learning, and even
student retention and graduation rates.
It is encouraging that a number of institutions are
interested in implementing value-added assessment as an approach to evaluate
student progress over time and to see how they compare with other
institutions. Such strategies are more effective when faculty and staff
across the institution are involved. Examples of some best practices include
the following:
- Constantly talking with colleagues about both
the challenges and successful initiatives involving undergraduate
education.
- Replicating successful initiatives (best
practices from within and beyond the campus), in order to benefit as
many students as possible.
- Working continuously to improve learning based
on what is measured — from advising practices and curricular issues to
teaching strategies — and making changes based on what we learn from
those assessments.
- Creating accountability by ensuring that
individuals and groups take responsibility for different aspects of
student success.
- Recruiting and rewarding faculty who are
committed to successful student learning (including examining the
institutional reward structure).
- Taking the long view by focusing on
initiatives over extended periods of time — in order to integrate best
practices into the campus culture.
We in the academy need to think broadly about
assessment. Most important, are we preparing our students to succeed in a
world that will be dramatically different from the one we live in today?
Will they be able to think critically about the issues they will face,
working with people from all over the globe? It is understandable that
others, particularly outside the university, are asking how we demonstrate
that our students are prepared to handle these issues.
Assessment is becoming a national imperative, and
it requires us to listen to external groups and address the issues they are
raising. At the same time, we need to encourage and facilitate discussions
among our faculty — those most responsible for curriculum, instruction, and
research — to grapple with the questions of assessment and accountability.
We must work together to minimize the growing tension among groups — both
outside and inside the university — so that we appreciate and understand
different points of view and the compelling need for assessment.
Bob Jensen's threads on controversies in higher education are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
NCLB = No Child Left Behind Law
A September 2007 Thomas B. Fordham Institute report
found NCLB's assessment system "slipshod" and characterized by "standards that
are discrepant state to state, subject to subject, and grade to grade." For
example, third graders scoring at the sixth percentile on Colorado's state
reading test are rated proficient. In South Carolina the third grade proficiency
cut-off is the sixtieth percentile.
Peter Berger, "Some Will Be Left
Behind," The Irascible Professor, November 10, 2007 ---
http://irascibleprofessor.com/comments-11-10-07.htm
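The Fordham Institute's point about discrepant cutoffs can be made concrete with a back-of-the-envelope sketch (the arithmetic is mine, not the report's): a student at the Nth percentile outscores N percent of test takers, so the percentile a state chooses as its proficiency cutoff directly determines what share of students get the "proficient" label.

```python
# Illustrative arithmetic (not from the Fordham report): how state-chosen
# percentile cutoffs translate into very different "proficient" rates.
# A cutoff at the 6th percentile labels 94% of students proficient;
# a cutoff at the 60th percentile labels only 40% proficient.

def proficient_share(percentile_cutoff: float) -> float:
    """Fraction of students rated proficient, given a percentile cutoff."""
    return (100.0 - percentile_cutoff) / 100.0

colorado_like = proficient_share(6)         # Colorado-style cutoff
south_carolina_like = proficient_share(60)  # South Carolina-style cutoff

print(f"Cutoff at 6th percentile:  {colorado_like:.0%} proficient")
print(f"Cutoff at 60th percentile: {south_carolina_like:.0%} proficient")
```

The same third grader can thus be "proficient" in one state and far from it in another, with no difference in what the child actually knows.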
"This is Only a Test," by Peter Berger, The Irascible
Professor, December 5, 2005 ---
http://irascibleprofessor.com/comments-12-05-05.htm
Back in 2002 President Bush predicted "great
progress" once schools began administering the annual testing regime
mandated by No Child Left Behind. Secretary of Education Rod Paige echoed
the President's sentiments. According to Mr. Paige, anyone who opposed NCLB
testing was guilty of "dismissing certain children" as "unteachable."
Unfortunately for Mr. Paige, that same week The New
York Times documented "recent" scoring errors that had "affected millions of
students" in "at least twenty states." The Times report offered a pretty
good alternate reason for opposing NCLB testing. Actually, it offered
several million pretty good alternate reasons.
Here are a few more.
There's nothing wrong with assessing what students
have learned. It lets parents, colleges, and employers know how our kids are
doing, and it lets teachers know which areas need more teaching. That's why
I give quizzes and tests and one of the reasons my students write essays.
Of course, everybody who's been to school knows
that some teachers are tougher graders than others. Traditional standardized
testing, from the Iowa achievement battery to the SATs, was supposed to help
us gauge the value of one teacher's A compared to another's. It provided a
tool with which we could compare students from different schools.
This works fine as long as we recognize that all
tests have limitations. For example, for years my students took a nationwide
standardized social studies test that required them to identify the
President who gave us the New Deal. The problem was the seventh graders who
took the test hadn't studied U.S. history since the fifth grade, and FDR
usually isn't the focus of American history classes for ten-year-olds. He
also doesn't get mentioned in my eighth grade U.S. history class until May,
about a month after eighth graders took the test.
In other words, wrong answers about the New Deal
only meant we hadn't gotten there yet. That's not how it showed up in our
testing profile, though. When there aren't a lot of questions, getting one
wrong can make a surprisingly big difference in the statistical soup.
Multiply our FDR glitch by the thousands of
curricula assessed by nationwide testing. Then try pinpointing which schools
are succeeding and failing based on the scores those tests produce. That's
what No Child Left Behind pretends to do.
Testing fans will tell you that cutting edge
assessments have eliminated inconsistencies like my New Deal hiccup by
"aligning" the tests with new state of the art learning objectives and grade
level expectations. The trouble is these newly minted goals are often
hopelessly vague, arbitrarily narrow, or so unrealistic that they're pretty
meaningless. That's when they're not obvious and the same as they always
were.
New objectives also don't solve the timing problem.
For example, I don't teach poetry to my seventh grade English students.
That's because I know that their eighth grade English teacher does an
especially good job with it the following year, which means that by the time
they leave our school, they've learned about poetry. After all, does it
matter whether they learn to interpret metaphors when they're thirteen or
they're fourteen as long as they learn it?
Should we change our program, which matches our
staff's expertise, just to suit the test's arbitrary timing? If we don't,
our seventh graders might not make NCLB "adequate yearly progress." If we
do, our students likely won't learn as much.
Which should matter more?
Even if we could perfectly match curricula and test
questions, modern assessments would still have problems. That's because most
are scored according to guidelines called rubrics. Rubric scoring requires
hastily trained scorers, who typically aren't teachers or even college
graduates, to determine whether a student's essay "rambles" or "meanders."
Believe it or not, that choice represents a twenty-five percent variation in
the score. Or how about distinguishing between "appropriate sentence
patterns" and "effective sentence structure," or language that's "precise
and engaging" versus "fluent and original."
These are the flip-a-coin judgments at the heart of
most modern assessments. Remember that the next time you read about which
schools passed and which ones failed.
Unreliable scoring is one reason the Government
Accountability Office condemned data "comparisons between states" as
"meaningless." It's why CTB/McGraw-Hill had to recall and rescore 120,000
Connecticut writing tests after the scores were released. It's why New York
officials discarded the scores from its 2003 Regents math exam. A 2001
Brookings Institution study found that "fifty to eighty percent of the
improvement in a school's average test scores from one year to the next was
temporary" and "had nothing to do with long-term changes in learning or
productivity." A senior RAND analyst warned that today's tests aren't
identifying "good schools" and "bad schools." Instead, "we're picking out
lucky and unlucky schools."
Students aren't the only victims of faulty scoring.
Last year the Educational Testing Service conceded that more than ten
percent of the candidates taking its 2003-2004 nationwide Praxis teacher
licensing exam incorrectly received failing scores, which resulted in many
of them not getting jobs. ETS attributed the errors to the "variability of
human grading."
The New England Common Assessment Program,
administered for NCLB purposes to all students in Vermont, Rhode Island, and
New Hampshire, offers a representative glimpse of the cutting edge. NECAP is
heir to all the standard problems with standardized test design, rubrics,
and dubiously qualified scorers.
NECAP security is tight. Tests are locked up, all
scrap paper is returned to headquarters for shredding, and testing scripts
and procedures are painstakingly uniform. Except on the mathematics exam,
each school gets to choose if its students can use calculators.
Whether or not you approve of calculators on math
tests, how can you talk with a straight face about a "standardized" math
assessment if some students get to use them and others don't? Still more
ridiculous, there's no box to check to show whether you used one or not, so
the scoring results don't even differentiate between students and schools
that did and didn't.
Finally, guess how NECAP officials are figuring out
students' scores. They're asking classroom teachers. Five weeks into the
year, before we've even handed out a report card to kids we've just met,
we're supposed to determine each student's "level of proficiency" on a
twelve point scale. Our ratings, which rest on distinguishing with allegedly
statistical accuracy between "extensive gaps," "gaps," and "minor gaps," are
a "critical piece" and "key part of the NECAP standard setting process."
Let's review. Because classroom teachers' grading
standards aren't consistent enough from one school to the next, we need a
standardized testing program. To score the standardized testing program,
every teacher has to estimate within eight percentage points how much their
students know so test officials can figure out what their scores are worth
and who passed and who failed.
If that makes sense to you, you've got a promising
future in education assessment. Unfortunately, our schools and students
don't.
"College Board Asks Group Not to Post Test Analysis," by Diana Jean
Schemol, The New York Times, December 4, 2004 --- http://www.nytimes.com/2004/12/04/education/04college.html?oref=login
The College Board, which owns the SAT college
entrance exam, is demanding that a nonprofit group critical of standardized
tests remove from its Web site data that breaks down scores by race, income
and sex.
The demand, in a letter to The National Center for
Fair and Open Testing, also known as FairTest, accuses the group of infringing
on the College Board's copyright.
"Unfortunately, your misuse overtly bypasses our
ownership and significantly impacts the perceptions of students, parents and
educators regarding the services we provide," the letter said.
The move by the College Board comes amid growing
criticism of the exams, with more and more colleges and universities raising
questions about their usefulness as a gauge of future performance and
discarding them as requirements for admission. The College Board is
overhauling parts of the exam and will be using a new version beginning in
March.
FairTest has led opposition to the exams, and
releases the results to support its accusation of bias in the tests, a claim
rejected by test makers, who contend the scores reflect true disparities in
student achievement. FairTest posts the information in easily accessible
charts, and Robert A. Schaeffer, its spokesman, said they were the Web site's
most popular features.
In its response to the College Board letter, which
FairTest posted on its Web site on Tuesday, the group said it would neither
take down the data nor seek formal permission to use it. FairTest has been
publicly showing the data for nearly 20 years, Mr. Schaeffer said, until now
without objection from the testing company, which itself releases the data in
annual reports it posts on its Web site.
"You can't copyright numbers like that,"
Mr. Schaeffer said. "It's all about public education and making the
public aware of score gaps and the potential for bias in the exams."
Devereux Chatillon, a specialist on copyright law at
Sonnenschein, Nath & Rosenthal in New York, said case law supported
FairTest's position. "Facts are not copyrightable," Ms. Chatillon
said. In addition, she said, while the College Board may own the exam, the
real authors of the test results are those taking the exams.
Continued in article
2004 Senior Test Scores: ACT --- http://www.fairtest.org/nattest/ACT%20Scores%202004%20Chart.pdf
2004 Senior Test Scores: SAT --- http://www.fairtest.org/nattest/SAT%20Scoresn%202004%20Chart.pdf
Fair Test Reacts to the SAT Outcomes --- http://www.fairtest.org/univ/2004%20SAT%20Score%20Release.html
Fair Test Home --- http://www.fairtest.org/
Jensen Comment:
If there is to be a test that sets apart students who demonstrate higher
ability, motivation, and aptitude for college studies, how would it differ from
the present Princeton tests that have been designed and re-designed over and
over again? I cannot find any FairTest models of what such a test would
look like. One would assume that by its very name FairTest still agrees
that some test is necessary. However, the group's position seems to
be that no national test is feasible that will give the same means and standard
deviations for all groups (males, females, and race categories). FairTest
advocates "assessments based on students' actual performances, not
one-shot, high-stakes exams."
Texas has such a performance-based system in place for admission to any state
university. The President of the University of Texas, however, wants the
system to be modified since his top-rated institution is losing all of its
admission discretion and may soon be overwhelmed with more admissions than can
be seated in classrooms. My module on this issue, which was a special
feature on 60 Minutes from CBS, is at http://faculty.trinity.edu/rjensen/book04q4.htm#60Minutes
The problem with performance-based systems (such as the requirement that any
state university in Texas must accept any graduate in the top 10% of the
graduating class from any Texas high school) is that high schools in the U.S.
generally follow the same grading scale as Harvard University. Most
classes give over half the students A grades. Some teachers give A grades
just for attendance or effort apart from performance. This means that when
it comes to isolating the top 10% of each graduating class, we're talking in
terms of Epsilon differences. I hardly think Epsilon is a fair criterion
for admission to college. Also, as was pointed out on 60 Minutes,
students with 3.9 grade averages from some high schools tend to score much lower
than students with 3.0 grade averages from other high schools. This might
achieve better racial mix but hardly seems fair to the 3.0 student who was
unfortunate enough to live near a high school having a higher proportion of top
students. That was the theme
of the 60 Minutes CBS special contrasting a 3.9 low SAT student who got
into UT versus a 3.0 student who had a high SAT but was denied admission to UT.
What we really need is to put more resources into fair chances for those who
test poorly or happen to fall Epsilon below that hallowed 10% cutoff in a
performance-based system. This may entail more time and remedial effort on
the part of students before or after entering college.
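The "Epsilon differences" point can be illustrated with a small simulation (the grade distributions are my own assumptions, not data from any school): when grades are inflated and compressed near 4.0, the GPA gap separating the last student inside the top 10% from the first student outside it shrinks toward zero.

```python
# Simulation sketch (distributional assumptions mine): grade compression
# makes the top-10% cutoff hinge on near-meaningless GPA differences.
import random
random.seed(42)

def top_decile_gap(gpas):
    """GPA gap between the last student inside the top 10% and the next one."""
    ranked = sorted(gpas, reverse=True)
    cutoff_index = len(ranked) // 10  # first student just outside the top 10%
    return ranked[cutoff_index - 1] - ranked[cutoff_index]

# Inflated school: most GPAs piled up between roughly 3.6 and 4.0.
inflated = [min(4.0, random.gauss(3.8, 0.15)) for _ in range(500)]
# Spread-out school: GPAs dispersed between 2.0 and 4.0.
spread = [min(4.0, max(2.0, random.gauss(3.0, 0.5))) for _ in range(500)]

print(f"Gap at the 10% cutoff, inflated grades: {top_decile_gap(inflated):.4f}")
print(f"Gap at the 10% cutoff, spread grades:   {top_decile_gap(spread):.4f}")
```

Under these assumptions the inflated-grades gap is typically a tiny fraction of a grade point, which is the Epsilon the comment above objects to using as an admissions criterion.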
Mount Holyoke Dumps the SAT
Mount Holyoke College, which decided in 2001 to make
the SAT optional, is finding very little difference in academic performance
between students who provided their test scores and those who didn't. The
women's liberal arts college is in the midst of one of the most extensive
studies to date about the impact of dropping the SAT -- a research project
financed with $290,000 from the Mellon Foundation. While the study isn't
complete, the college is releasing some preliminary results. So far, Mount
Holyoke has found that there is a difference of 0.1 point in the grade-point
average of those who do and do not submit SAT scores. That is equivalent to
approximately one letter grade in one course over a year of study. Those
results are encouraging to Mount Holyoke officials about their decision in 2001.
Scott Jaschik, "Not Missing the SAT," Inside Higher Ed March 9, 2005
--- http://www.insidehighered.com/insider/not_missing_the_sat
Jensen Comment:
These results differ from the experiences of the University of Texas system
where grades and test scores differ greatly between secondary
schools. Perhaps Mount Holyoke is not getting applications from
students in the poorer school districts. See http://faculty.trinity.edu/rjensen/book04q4.htm#60Minutes
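The study's equivalence of a 0.1 GPA-point difference to "one letter grade in one course over a year of study" is simple arithmetic (the course-load assumption here is mine, not the Mellon-funded study's): on a 4.0 scale a full letter grade is 1.0 grade point, so with eight graded courses in a year, dropping one of them by a full letter moves the year's GPA by 1.0/8 = 0.125, close to the observed 0.1-point gap.

```python
# Sketch of the arithmetic behind "0.1 GPA point ~ one letter grade in one
# course over a year" (the eight-course load is an assumed typical year).

COURSES_PER_YEAR = 8       # assumption: four graded courses per semester
LETTER_GRADE_POINTS = 1.0  # e.g., A (4.0) down to B (3.0) on a 4.0 scale

gpa_impact = LETTER_GRADE_POINTS / COURSES_PER_YEAR
print(f"One letter grade in 1 of {COURSES_PER_YEAR} courses shifts "
      f"the yearly GPA by {gpa_impact:.3f} points")
```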
Dangers of Self-Assessment
My undergraduate students can’t accurately predict
their academic performance or skill levels. Earlier in the semester, a writing
assignment on study styles revealed that 14 percent of my undergraduate English
composition students considered themselves “overachievers.” Not one of those
students was receiving an A in my course by midterm. Fifty percent were
receiving a C, another third was receiving B’s and the remainder had earned
failing grades by midterm. One student wrote, “overachievers like myself began a
long time ago.” She received a 70 percent on her first paper and a low C at
midterm.
Shari Wilson, "Ignorant of Their
Ignorance," Inside Higher Ed, November 16, 2006 ---
http://www.insidehighered.com/views/2006/11/16/wilson
Jensen comment
This does not bode well for self-assessment.
Do middle-school students understand how well they actually learn?
Given national mandates to ‘leave no child behind,’
grade-school students are expected to learn an enormous amount of course
material in a limited amount of time. “Students have too much to learn, so it’s
important they learn efficiently,” says Dr. John Dunlosky, Kent State professor
of psychology and associate editor of Journal of Experimental Psychology:
Learning, Memory and Cognition. Today, students are expected to understand and
remember difficult concepts relevant to state achievement tests. However, a
major challenge is the student’s ability to judge his own learning. “Students
are extremely over confident about what they’re learning,” says Dunlosky.
Dunlosky and his colleague, Dr. Katherine Rawson, Kent State assistant professor
of psychology, study metacomprehension, or the ability to judge your own
comprehension and learning of text materials. Funded by the U.S. Department of
Education, their research primarily focuses on fifth, seventh and eighth graders
as well as college-aged students, and how improving metacomprehension can, in
turn, improve students’ self-regulated learning.
PhysOrg, November 26, 2007 ---
http://physorg.com/news115318315.html
Competency-Based Assessment
Question
What are two early adopters of competency-based education in distance education
courses?
Undergraduate Program Answer: Western Governors University (WGU)
Graduate Program Answer: Chartered Accountancy School of Business (CASB)
in Western Canada
See
http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge
Question
How do the competency-based programs of the University of Chicago (in the
1900s) and of the 21st-century University of Wisconsin, University of Akron,
and Southern New Hampshire University differ from the WGU and CASB programs?
Answer
WGU and CASB administer competency-based testing only for students
enrolled in their distance education courses.
The other universities mentioned provide(d) transcript credits without
requiring enrollment in courses.
"Competency-Based Education Goes Mainstream in Wisconsin," by Scott
Carlson, Chronicle of Higher Education, September 30, 2013 ---
http://chronicle.com/article/Competency-Based-Education/141871/?cid=wc&utm_source=wc&utm_medium=en
Twenty years ago, Aaron Apel headed off to the
University of Wisconsin at Platteville, where he spent too little time
studying and too much time goofing off. He left the university, eventually
earning an associate degree in information technology at a community
college.
Now, as a longtime staff member in the registrar's
office at Wisconsin's Madison campus, he has advanced as far as his
education will let him. "I have aspirations to climb the ladder in
administration, but the opportunity isn't there without a four-year degree,"
he says.
Spending months in a classroom is out of the
question: In addition to his full-time job, he helps his wife run an
accounting business, shuttles three kids to activities, and oversees an
amateur volleyball league. Now he may have another option. Later this year
Wisconsin's extension system will start a competency-based learning program,
called the Flexible Option, in which students with professional experience
and training in certain skills might be able to test out of whole courses on
their way to getting a degree.
Competency-based learning is already famously used
by private institutions like Southern New Hampshire University and Western
Governors University, but Wisconsin will be one of the first major public
universities to take on this new, controversial form of granting degrees.
Among the system's campuses, Milwaukee was first to announce bachelor's
degrees in nursing, diagnostic imaging, and information science and
technology, along with a certificate in professional and business
communication. UW Colleges, made up of the system's two-year institutions,
is developing liberal-arts-oriented associate degrees. The Flex Option, as
it's often called, may cost the Wisconsin system $35-million over the next
few years, with half of that recovered through tuition. The system is
starting with a three-month, all-you-can-learn term for $2,250.
If done right, the Flex Option could help a
significant number of adults acquire marketable skills and cross the college
finish line—an important goal in Wisconsin, which lags behind neighboring
states in percentage of adults with college diplomas. There are some 800,000
people in the state who have some college credits but no degree—among them
Wisconsin Gov. Scott Walker, who dropped out of Marquette University. He had
pushed the university system to set up the Flex Option early last year, when
he was considering inviting Western Governors to the state to close a
statewide skills gap in high-demand fields like health care, information
technology, and advanced manufacturing.
"Students in general are learning in very different
ways," the governor, a Republican, says in an interview. The state's
population of adults with some college but no degree constitutes "a
target-rich environment for us to find the new engineers, health-care
professionals, and IT experts that we need to fill these jobs, so we don't
have to recruit them from elsewhere and we don't have to wait for years for
undergraduates."
But if it's designed poorly, the program will
confirm perceptions held by some faculty members, who already thought that
the governor's policies were hostile to higher education. They worry that
the Flex Option will turn the University of Wisconsin into a kind of diploma
mill or suck resources from a system that is already financially pressured.
Faculty at the Green Bay campus passed a resolution to express "doubts that
the Flexible degree program will meet the academic standards of a university
education."
"It's an intriguing idea, but I think the questions
that need to be asked are what are the serious limitations of it," says Eric
Kraemer, a philosophy professor at the La Crosse campus, where faculty
members were also highly skeptical of the Flex Option. Mr. Kraemer wonders
whether there actually is a significant group of Wisconsin adults who have
the initiative and ability to test out of big portions of degree programs.
And, particularly in a squishier subject area like the humanities, he
wonders whether testing can adequately evaluate what a traditional student
would glean through time and effort spent in a course. "I have serious
doubts about the effectiveness of simply doing a competency test to
determine whether someone can actually think on their feet."
Certainly, there are a lot of details to be worked
out, even as the Flexible Option prepares to enroll its first students. Some
of the challenges are technical or logistical: Wisconsin's extension program
will have to spend millions to create a student-information system flexible
enough to work in a new environment, where student progress is tracked not
by course time but competencies, and where instruction and assessment are
decoupled.
Continued in article
"Innovations in Higher Education? Hah! College leaders need to move
beyond talking about transformation before it's too late," by Ann Kirschner,
Chronicle of Higher Education, April 8, 2012 ---
http://chronicle.com/article/Innovations-in-Higher/131424/?sid=wc&utm_source=wc&utm_medium=en
. . .
(Conclusion)
Some of the most interesting work begins in the academy but grows beyond it.
"Scale" is not an academic value—but it should be. Most measures of prestige
in higher education are based on exclusivity; the more prestigious the
college, the larger the percentage of applicants it turns away. Consider the
nonprofit Khan Academy, with its library of more than 3,000 education videos
and materials, where I finally learned just a little about calculus. In the
last 18 months, Khan had 41 million visits in the United States alone. It is
using the vast data from that audience to improve its platform and grow
still larger. TED, the nonprofit devoted to spreading ideas, just launched
TED-Ed, which uses university faculty from around the world to create
compelling videos on everything from "How Vast Is the Universe?" to "How
Pandemics Spread." Call it Khan Academy for grown-ups. The Stanford
University professor Sebastian Thrun's free course in artificial
intelligence drew 160,000 students in more than 190 countries. No surprise,
the venture capitalists have come a-calling, and they are backing
educational startups like Udemy and Udacity.
All of those are signposts to a future where
competency-based credentials may someday compete with a degree.
At this point, if you are affiliated with an Ivy
League institution, you'll be tempted to guffaw, harrumph, and otherwise
dismiss the idea that anyone would ever abandon your institution for such
ridiculous new pathways to learning. You're probably right. Most
institutions are not so lucky. How long will it take for change to affect
higher education in major ways? Just my crystal ball, but I would expect
that institutions without significant endowments will be forced to change by
2020. By 2025, the places left untouched will be few and far between.
Here's the saddest fact of all: It is those leading
private institutions that should be using their endowments and moral
authority to invest in new solutions and to proselytize for experimentation
and change, motivated not by survival but by the privilege of securing the
future of American higher education.
The stakes are high. "So let me put colleges and
universities on notice," President Obama said in his recent State of the
Union address. "If you can't stop tuition from going up, the funding you get
from taxpayers will go down." Because of the academy's inability to police
itself and improve graduation rates, and because student debt is an
expedient political issue, the Obama administration recently threatened to
tie colleges' eligibility for campus-based aid programs to institutions'
success in improving affordability and value for students.
Whether the president's threat is fair or not, it
will not transform higher education. Change only happens on the ground.
Despite all the reasons to be gloomy, however, there is room for optimism.
The American university, the place where new ideas are born and lives are
transformed, will eventually focus that lens of innovation upon itself. It's
just a matter of time.
Jensen Comment
This is a long and important article for all educators to read carefully. Onsite
colleges have always served many purposes, but one purpose they never served is
to be knowledge fueling stations where students go to fill their tanks. At best,
colleges put a shot glass of fuel into tanks of unknown capacity.
Students go to an onsite college for many reasons other than to put fuel in
their knowledge tanks. They go to live and work in relatively safe transitional
environments between home and the mean streets. They go to mature, socialize,
mate, drink, laugh, leap over hurdles that societies place in front of career
paths, etc. The problem in the United States is that onsite college living and
education have become relatively expensive luxuries. Students must now make more
painful decisions about how much to impoverish their parents and how deeply to
go into debt.
I have a 22-year-old granddaughter majoring in pharmacy (a six-year program).
She will pay off her student loans before she's 50 years old if she's lucky.
Some older students who've not been able to pay off their loans are becoming
worried that the Social Security Administration will garnish their retirement
Social Security monthly payments for unpaid student loans.
We've always known that colleges are not necessary places for learning and
scholarship. Until 43 years ago (when the Internet was born) private and public
libraries were pretty darn necessary for scholarship. Now the Internet provides
access to most known knowledge of the world. But becoming a scholar on the
Internet is relatively inefficient and overwhelming without the aid of
distillers of knowledge, which is where onsite and online college courses can
greatly add to efficiency of learning.
But college courses can be terribly disappointing as distillers of knowledge.
For one thing, grade inflation disgracefully watered down the amount of real
fuel in that shot glass of knowledge provided in a college course ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
Grades rather than learning became the tickets to careers and graduate schools,
thereby leading to street-smart cheating taking over for real learning
perspiration ---
http://faculty.trinity.edu/rjensen/Plagiarism.htm
When 80% of Harvard's graduating class graduates cum laude, we no
longer identify which graduates were the best scholars in their class.
Soon those graduates from Harvard, Florida A&M University, Capella
University, and those who learned on their own from free courses, video
lectures, and course materials on the Web will all face some sort of common
examinations (written and oral) of their competencies in specialties.
Competency testing will be the great leveler: much like licensure examinations
such as the Bar Exam, the CPA exam, and the CFA exam, it will be graded on the
basis of what you know rather than where you learned it. It won't really
matter whether you paid a fortune to learn Bessel Functions onsite at MIT or for
free from the MITx online certificate program ---
http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI
If you are an educator or are becoming an educator, please read:
"Innovations in Higher Education? Hah! College leaders need to move beyond
talking about transformation before it's too late," by Ann Kirschner,
Chronicle of Higher Education, April 8, 2012 ---
http://chronicle.com/article/Innovations-in-Higher/131424/?sid=wc&utm_source=wc&utm_medium=en
Competency-Based Assessment ---
http://faculty.trinity.edu/rjensen/competency.htm
There are a few really noteworthy competency-based distance education
programs, including Western Governors University (WGU) and the Chartered
Accountancy School of Business (CASB) in Canada. But these competency-based
programs typically have assigned instructors and bear the costs of those
instructors. The instructors, however, do not assign grades to students.
It appears that Southern New Hampshire University (a private institution)
is taking competency-based distance education to a new level by eliminating the
instructors. It should be noted that SNHU has both an onsite campus and online
degree programs.
"Online Education Is Everywhere. What’s the Next Big Thing?" by Marc
Parry, Chronicle of Higher Education, August 31, 2011 ---
http://chronicle.com/blogs/wiredcampus/online-education-is-everywhere-whats-the-next-big-thing/32898?sid=wc&utm_source=wc&utm_medium=en
. . .
The vision is that students could sign up for
self-paced online programs with no conventional instructors. They could work
at their own speeds through engaging online content that offers built-in
assessments, allowing them to determine when they are ready to move on. They
could get help through networks of peers who are working on the same
courses; online discussions could be monitored by subject experts. When
they’re ready, students could complete a proctored assessment, perhaps at a
local high school, or perhaps online. The university’s staff could then
grade the assessment and assign credit.
And the education could be far cheaper, because
there would be no expensive instructor and students could rely on free, open
educational resources rather than expensive textbooks. Costs to the student
might include the assessment and the credits.
“The whole model hinges on excellent assessment, a
rock-solid confidence that the student has mastered the student-learning
outcomes,” the memo says. “If we know with certainty that they have, we
should no longer care if they raced through the course or took 18 months, or
if they worked on their courses with the support of a local church
organization or community center or on their own. The game-changing idea
here is that when we have assessment right, we should not care how a student
achieves learning. We can blow up the delivery models and be free to try
anything that shows itself to work.”
Continued in article
Jensen Comment
In its early history, the University of Chicago had competency-based programs
where grades were assigned solely on the basis of scores on final examinations.
Students did not have to attend class.
Bob Jensen's threads on competency-based assessment ---
http://faculty.trinity.edu/rjensen/competency.htm
Bob Jensen's threads on distance education alternatives are at
http://faculty.trinity.edu/rjensen/Crossborder.htm
Bob Jensen's threads on higher education controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
I should point out that this is very similar to the AAA's Innovation in
Accounting Education Award Winning BAM Pedagogy commenced at the University of
Virginia (but there were instructors who did not teach) ---
http://faculty.trinity.edu/rjensen/265wp.htm
"College Degree, No Class Time Required University of Wisconsin to Offer a
Bachelor's to Students Who Take Online Competency Tests About What They Know,"
by Caroline Porter, The Wall Street Journal, January 24, 2013 ---
http://online.wsj.com/article/SB10001424127887323301104578255992379228564.html
Thank you Ramesh Fernando for the heads up.
David Lando plans to start working toward a diploma
from the University of Wisconsin this fall, but he doesn't intend to set
foot on campus or even take a single online course offered by the school's
well-regarded faculty.
Instead, he will sit through hours of testing at
his home computer in Milwaukee under a new program that promises to award a
bachelor's degree based on knowledge—not just class time or credits.
"I have all kinds of credits all over God's green
earth, but I'm using this to finish it all off," said the 41-year-old
computer consultant, who has an associate degree in information technology
but never finished his bachelor's in psychology.
Colleges and universities are rushing to offer free
online classes known as "massive open online courses," or MOOCs. But so far,
no one has figured out a way to stitch these classes together into a
bachelor's degree.
Now, educators in Wisconsin are offering a possible
solution by decoupling the learning part of education from student
assessment and degree-granting.
Wisconsin officials tout the UW Flexible Option as
the first to offer multiple, competency-based bachelor's degrees from a
public university system. Officials encourage students to complete their
education independently through online courses, which have grown in
popularity through efforts by companies such as Coursera, edX and Udacity.
No classroom time is required under the Wisconsin
program except for clinical or practicum work for certain degrees.
Elsewhere, some schools offer competency-based
credits or associate degrees in areas such as nursing and business, while
Northern Arizona University plans a similar program that would offer
bachelor's degrees for a flat fee, said spokesman Eric Dieterle. But no
other state system is offering competency-based bachelor's degrees on a
systemwide basis.
Wisconsin's Flexible Option program is "quite
visionary," said Molly Corbett Broad, president of the American Council on
Education, an education policy and lobbying group that represents some 1,800
accredited colleges and universities.
In Wisconsin, officials say that about 20% of adult
residents have some college credits but lack a degree. Given that a growing
number of jobs require a degree, the new program appeals to potential
students who lack the time or resources to go back to school full time.
"It is a big new idea in a system like ours, and it
is part of the way the ground is shifting under us in higher education,"
said Kevin Reilly, president of the University of Wisconsin System, which
runs the state's 26 public-university campuses.
Under the Flexible Option, assessment tests and
related online courses are being written by faculty who normally teach the
related subject-area classes, Mr. Reilly said.
Officials plan to launch the full program this
fall, with bachelor's degrees in subjects including information technology
and diagnostic imaging, plus master's and bachelor's degrees for registered
nurses. Faculty are working on writing those tests now.
The charges for the tests and related online
courses haven't been set. But university officials said the Flexible Option
should be "significantly less expensive" than full-time resident tuition,
which averages about $6,900 a year at Wisconsin's four-year campuses.
The Wisconsin system isn't focusing on the
potential cost savings the program may offer it but instead "the university
and the state are doing this to strengthen the state work force," said
university spokesman David Giroux.
Siva Vaidhyanathan, a media-studies professor at
the University of Virginia who has written about the future of universities,
called the program a "worthy experiment" but warned that school officials
"need to make sure degree plans are not watered down."
Some faculty at the school echoed the concern,
since the degree will be indistinguishable from those issued by the
University of Wisconsin the traditional way. "There has got to be very
rigorous documentation that it lives up to the quality of that name," said
Mark Cook, an animal-sciences professor and chairman of the university
committee for the faculty senate at the Madison campus.
Wisconsin Gov. Scott Walker has championed the
idea, in part because he left college in his senior year for a job
opportunity and never finished his degree. He said he hoped to use the
Flexible Degree option himself.
"I think it is one more way to get your degree. I
don't see it as replacing things," Mr. Walker said
Continued in article
Jensen Comment
If competency-based learning is to be offered in this manner, I think the
pretense that it is equivalent to a traditional undergraduate degree should be
dropped. An undergraduate diploma traditionally maps to a curriculum that
includes some courses that just cannot be examined with the competency-based
testing proposed in this article. This includes speech courses, where students
must stand in front of audiences to perform and be evaluated. It includes case
courses, where a student's contributions to oral discussions of a case must be
evaluated as those discussions take serendipitous tracks shaped by student
interactions. Science laboratories and many other courses entail use of onsite
equipment, chemicals, etc. Some physical education courses entail individual and
team performances. Music courses often entail performances on musical
instruments or singing before critics. Education courses often entail live
teaching and other interactions with K-12 students.
In between we have online universities that still make students take courses
and interact with instructors and other students by email, chat rooms, etc. A
few like Western Governors University even have course grades based on
competency-based testing. But WGU only offers certain majors that do not entail
onsite laboratory experiences and other onsite experiences. In the 19th Century
the University of Chicago allowed students to take final examinations in some
courses without attending any classes. But this did not apply to all types
of courses available on campus.
The day will probably come where there are no undergraduate or graduate
degrees. Students will instead have transcript records of their graded
performances onsite and online. But that day has not yet arrived. The above
University of Wisconsin alternative to obtaining an undergraduate diploma must
be severely limited in terms of the total curriculum available onsite at state
university campuses in Wisconsin.
The above University of Wisconsin alternative to obtaining an online diploma
cuts out important parts of online learning that take place when students
frequently interact with instructors and other students enrolled in a class.
Bob Jensen's threads on the dark side of education technology ---
http://faculty.trinity.edu/rjensen/000aaa/theworry.htm
Bob Jensen's threads on assessment are at
http://faculty.trinity.edu/rjensen/Assess.htm
Update on the Roaring Online Nonprofit Western Governors University (WGU)
founded in 1997 by the governors of 19 states
A competency-based university where instructors don't assign the grades ---
grades are based upon competency testing
WGU does not admit foreign students
WGU now has over 30,000 students from sponsoring states for this nonprofit,
private university
Western Governors University (WGU) ---
http://en.wikipedia.org/wiki/WGU
Competency-Based Learning ---
http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge
The article below is about WGU-Texas which was "founded" in 2011 when Texas
joined the WGU system
"Reflections on the First Year of a New-Model University," by Mark David
Milliron, Chronicle of Higher Education, October 1, 2012 ---
http://chronicle.com/article/Reflections-on-the-First-Year/134670/?cid=wc&utm_source=wc&utm_medium=en
Western Governors University Texas, where I am
chancellor, is not an easy institution to describe to your mother—or even
your hip sister. It just doesn't fit the profile of most traditional
universities, even the newer for-profit and online ones. It brings the work
of a national, online, nonprofit university into a state, and it embraces a
competency-based education model that is rarely found on an institutionwide
level.
Even for seasoned educators, WGU Texas feels
different. And in a year that has seen flat or declining enrollments at many
traditional colleges, reports critical of for-profit institutions, and
continuing debate over the perils and promise of online learning, our story,
and our growth, has been unique. As we hit our one-year anniversary, it's
worth taking a few moments to reflect on the ups, downs, challenges, and
champions of this newest state model. I'd offer three key reflections on
lessons we've learned:
Building a strong foundation.
Western Governors was founded as a private, multistate online university 15
years ago by governors of Western states. Texas is only the third state
model within the system, following WGU Indiana and WGU Washington. Before
our opening, leaders of Western Governors took time to make sure the idea of
this state university made sense for Texas. The intent was to add
high-quality, affordable capacity to the state's higher-education system,
particularly for adult learners, and to localize it for Texans and their
employers.
This outpost was poised to "go big" in one of the
biggest of states, offering more than 50 bachelor's and master's degrees in
high-demand fields in business, education, information technology, and
health professions. WGU's online-learning model allows students to progress
by demonstrating what they know and can do rather than by logging time in
class accumulating credit hours.
In meetings across the state, the idea of WGU Texas
gained the support of the state's political, legislative, and
higher-education leaders, as well as the Texas Workforce Commission and the
Texas Association of Community Colleges. Rushing to roll out was not the
goal; entering the education ecosystem with solid support of the model was.
I came on board as chancellor in December 2011.
Having served on WGU's Board of Trustees for six years, I knew the model,
and having graduated from and worked for the University of Texas at Austin,
I knew Texas.
In the past six months, we have hired key staff and
faculty, formed a state advisory board, opened a main office and training
center in downtown Austin, launched our first wave of student outreach,
begun working with employers in different metro regions, and started
connecting online and on the ground with students. After absorbing WGU's
1,600 existing Texas students, WGU Texas grew by more than 60 percent in
this first year, entering August 2012 with more than 3,000 students.
In about eight weeks, we'll hold our first
commencement in Austin, celebrating the graduation of more than 400
students. We're moving quickly now, but it's the firm foundation of
outreach, support, and systems that served us well as we took on the next
two challenges:
Confronting conflation. WGU Texas
is laser-focused on a student population that is typically underserved. We
see ourselves as a good fit for adult learners who need an affordable,
quality, and flexible learning model, particularly working students who want
to attend full time. We are especially focused on the more than three
million Texans who have some college and no credential—students like Jason
Franklin, a striving adult learner in a high-demand IT field who had gone as
far as he could in his career without a degree. He earned a bachelor's and a
master's degree through Western Governors, and is now working on a master's
degree from WGU Texas.
We'd like to help these students reach their goals
and get on a solid career and lifelong-learning path.
However, in offering a new model like ours, you
quickly find the conflation problem a challenge. Some assume that you're
trying to compete for the fresh-from-high-school graduates who want a campus
experience. Others assume that because you're online, you must be a
for-profit university. Still others put all online education programs in the
same bucket, not distinguishing at all between a traditional model online
and a deeply personalized, competency-based learning model.
Fighting conflation by clearly differentiating and
properly positioning our university has been essential. We've had to be
clear—and to repeat often—that our approach is designed for adult learners
who have some college and work experience. We're absolutely OK with telling
prospective students, partner colleges, and state-policy leaders that for
18- to 20-year-olds looking to embark on their first college experience, we
are probably not the right fit. In fact, first-time freshmen make up less
than 5 percent of our student population.
The for-profit conflation has been even more
interesting. Many people assume that any online university is for-profit. We
are not. And even when we assure them that our nonprofit status keeps us
deeply committed to low tuition—we have a flat-rate, six-month-term tuition
averaging less than $3,000 for full-time students, which our national parent
WGU has not raised for four years—they have a hard time getting their minds
around it.
Others are sure we are nothing more than an online
version of the traditional model, relying entirely on adjunct faculty. When
we explain our history, learning model, and reliance on full-time faculty
members who specialize in either mentoring or subject matter, it takes some
time. But once people embrace the idea of a personal faculty mentor who
takes a student from first contact to crossing the graduation stage, they
warm quickly to the model.
Synching with the state's needs.
While forming the foundation and fighting conflation are important, I'd say
the key to WGU's state-model successes is the commitment to synching with
the economic, educational, and student ecosystem of the state.
On the economic level, we've been able to work
directly with employers eager to support our university, advance our
competency-centered model, and hire our graduates. Educationally we have
been fortunate to have smart and strategic partners that have guided our
entry into the state. For example, our Finish to Go Further transfer
program, in partnership with the Texas community-college association,
motivates students to complete their associate degrees before transferring.
This strategy supports the goal of the Texas Higher Education Coordinating
Board of significantly improving postsecondary access and success in Texas.
Continued in article
Bob Jensen's threads on assessment (including competency-based assessment)
---
http://faculty.trinity.edu/rjensen/Assess.htm
Jensen Comment
WGU is neither a traditional university nor a MOOC. It started as an experiment
to deliver a quality college education to more students without requiring the 19
founding states to build and maintain additional physical campuses. Admittedly,
avoiding those enormous construction and maintenance costs was one of the main
incentives. WGU was mostly an outreach program for non-traditional students who
for one reason or another were unable to attend onsite campuses. But the primary
goal of WGU was not, and still is not, confined to adult education.
WGU is not intended to take over onsite campus education alternatives. The
founders of WGU are well aware that living and learning on an onsite campus
brings many important components to education and maturation and socialization
that WGU cannot offer online. For example, young students on campus enter a new
phase of life living outside the homes and daily oversight of their parents. But
the transition is less abrupt than living on the mean streets of real life.
Students meet face-to-face on campus and are likely to marry or live with
students they are attracted to there. Campus students can
participate in athletics, music performances, theatre performances, dorm life,
chapel life, etc.
But WGU is not a MOOC where 100,000 anonymous students may be taking an
online course. Instead, WGU courses are relatively small with intimate
communications 24/7 with instructors and other students in most of the courses.
In many ways the learning communications may be much closer online at WGU than
on campus at the University of Texas, where classrooms often hold hundreds of
students taking a course.
There are some types of learning that can take place in live classrooms
that are almost impossible online.
For example, an onsite case analysis class (Harvard style) takes on a life of
its own that case instructors cannot anticipate before class. Students are
forced to speak out in front of other students. A student's unexpected idea may
change the direction of the entire case discussion for the remainder of the
class. I cannot imagine teaching many Harvard Business School cases online even
though there are ways to draw out innovative ideas and discussions online.
Physical presence is part and parcel of teaching many HBS cases.
Competency-based grading has advantages and disadvantages.
Competency-based grading removes incentives to brown nose instructors for better
grades. It's unforgiving for lazy and unmotivated students. But these advantages
can also be disadvantages. Some students become more motivated by hoping that
their instructors will reward effort as well as performance. At unexpected
points in life those rewards for effort may come at critical times just before a
student is apt to give up and look for a full-time McJob.
Some students are apt to become extremely bored learning about Shakespeare or
Mozart. But in attempting to please instructors with added effort, the students
may actually discover at some unexpected point something wonderful about
Shakespeare or Mozart. Mathematics in particular is one of those subjects that
can be a complete turn-off until suddenly a light clicks and the student
discovers that math is not only interesting --- it can also become easier once
you pass a key point in the mathematics learning process. This definitely
happened to me, and the light did not shine until I started a doctoral program.
Quite suddenly I loved mathematics and made it the central component of my five
years of full-time doctoral studies at Stanford University.
Thus WGU and the University of Texas should not be considered competitors.
They are different alternatives that have some of the same goals (such as
competency in learning content) and some different goals (such as living with
other students and participating in extracurricular activities).
I wish WGU well and hope it thrives alongside the traditional state-supported
campuses. WGU in some ways was a precursor to MOOC education, but WGU is not a
MOOC in the sense that classes are small and can be highly interactive with
other students and with instructors. In a MOOC, students have to be more
motivated to learn on their own and master the material without much outside
help from other students or instructors.
There are many ways to teach and many ways to learn. WGU found its niche.
There's no one-size-fits-all to living and learning.
Bob Jensen's threads on higher education controversies ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
Western Governors University ---
http://en.wikipedia.org/wiki/Western_Governors_University
Instructors do not assign the grades at this successful competency-based
testing university
"A President Brings a Revolutionary University to Prominence," by
Goldie Blumenstyk, Chronicle of Higher Education, February 26, 2012 ---
http://chronicle.com/article/A-President-Brings-a/130915/?sid=wc&utm_source=wc&utm_medium=en
Western Governors University,
first conceived in 1995, embodied an idea that was
ahead of its time. And early in its life, that showed.
Traditional accreditors resisted its model: an all-online, competency-based
institution. Experts scoffed at its grandiose promises to reshape higher
education. Students, unmoved by its founders' ambitious early enrollment
projections, mostly stayed away.
Yet a Utah technology entrepreneur named Robert W.
Mendenhall, who had been asked to kick-start the venture a few years into
its existence, says he never doubted. "It took me about 30 seconds to decide
I would do it," says Mr. Mendenhall, WGU's president since 1999. "I was
always confident that we'd pull it off. The idea made so much sense."
Today the unusual institution has drawn growing
notice from national mainstream news media and at meetings on college
affordability by both the U.S. Senate and President Obama. It has a growing
student body of more than 25,000 students.
Mr. Mendenhall, now 57, came to WGU when it had no
students and no degrees. "The vision of it was just coagulating," recalls
Michael O. Leavitt, the former Utah governor who was instrumental in the
institution's founding and in Mr. Mendenhall's hiring.
With his know-how for building start-up businesses,
a practical willingness to shed time-consuming and unpromising components
(like a plan to run an online catalog of online courses from other
institutions), and what Mr. Leavitt calls a determined "sense of mission"
for low-cost, competency-based higher education, Mr. Mendenhall kept the
nonprofit institution moving.
Internally, he was an "in your face" presence, a
colleague says, while externally, thanks in no small part to the political
backing of 19 governors, he pulled the strings that would eventually land
WGU millions in federal grants to develop its online programs and its
distinguishing proficiency exams by which students progress toward a degree,
and millions more from the Lumina Foundation to create what would become its
turning point, a teachers' college.
Continued in article
Bob Jensen's threads on competency-based assessment ---
http://faculty.trinity.edu/rjensen/Assess.htm#ComputerBasedAssessment
Competency-Based College Credit ---
http://faculty.trinity.edu/rjensen/Assess.htm#ECA
"Online Education Is Everywhere. What’s the Next Big Thing?" by Marc
Parry, Chronicle of Higher Education, August 31, 2011 ---
http://chronicle.com/blogs/wiredcampus/online-education-is-everywhere-whats-the-next-big-thing/32898?sid=wc&utm_source=wc&utm_medium=en
Western Governors University (a nonprofit, competency-based online
university) ---
http://en.wikipedia.org/wiki/Western_Governors_University
Also see http://www.wgu.edu/home2
New Charter University (a for-profit, self-paced, competency-based
online university) ---
http://en.wikipedia.org/wiki/New_Charter_University
"No Financial Aid, No Problem. For-Profit University Sets $199-a-Month
Tuition for Online Courses," by Marc Parry, Chronicle of Higher Education,
March 29, 2012 ---
http://chronicle.com/article/No-Financial-Aid-No-Problem/131329/?sid=wc&utm_source=wc&utm_medium=en
It's a higher-education puzzle: Students are
flocking to Western Governors University, driving growth of 30 to 40 percent
each year. You might expect that competitors would be clamoring to copy the
nonprofit online institution's model, which focuses on whether students can
show "competencies" rather than on counting how much time they've spent in
class.
So why haven't they?
Two reasons, says the education entrepreneur Gene
Wade. One, financial-aid regulatory problems that arise with self-paced
models that aren't based on seat time. And two, opposition to how Western
Governors changes the role of professor, chopping it into "course mentors"
who help students master material, and graders who evaluate homework but do
no teaching.
Mr. Wade hopes to clear those obstacles with a
start-up company, UniversityNow, that borrows ideas from Western Governors
while offering fresh twists on the model. One is cost. The for-profit's new
venture—New Charter University, led by Sal Monaco, a former Western
Governors provost—sidesteps the loan system by setting tuition so cheap that
most students shouldn't need to borrow. The price: $796 per semester, or
$199 a month, for as many classes as they can finish.
"This is not buying a house," says Mr. Wade,
co-founder and chief executive of UniversityNow. "This is like, do I want to
get cable?"
Another novelty: New Charter offers a
try-it-before-you-buy-it platform that mimics the "freemium" model of many
consumer Web services. Anyone can create an account and start working
through its self-paced online courses free of charge. Their progress gets
recorded. If they decide to pay up and enroll, they get access to an adviser
(who helps navigate the university) and course specialists (who can discuss
the material). They also get to take proctored online tests for course
credit.
The project is the latest in a series of
experiments that use technology to rethink the economics of higher
education, from the $99-a-month introductory courses of StraighterLine to
the huge free courses provided through Stanford and MIT.
For years, some analysts have argued that ready
access to Pell Grants and federal loans actually props up college prices,
notes Michael B. Horn, executive director for education at Innosight
Institute, a think tank focused on innovation. That's because institutions
have little incentive to charge anything beneath the floor set by available
financial aid.
"Gene and his team are basically saying, the heck
with that—we're going to go around it. We think people can afford it if we
offer it at this low a price," Mr. Horn says. "That could be revolutionary."
Yet the project faces tall hurdles: Will employers
value these degrees? Will students sign on? And, with a university that
lacks regional accreditation right now—New Charter is nationally accredited
by the Distance Education and Training Council, and is considering seeking
regional accreditation—will students be able to transfer its credits?
Mr. Wade banks on appealing to working adults who
crave easier access to education. When asked who he views as the
competition, his reply is "the line out the door at community college." In
California, where Mr. Wade is based, nearly 140,000 first-time students at
two-year institutions couldn't get into any courses at all during the
previous academic year, according to a recent Los Angeles Times editorial
about the impact of state budget cuts.
Mr. Wade himself benefited from a first-class
education, despite being raised without much money in a housing project in a
tough section of Boston. Growing up there, during an era when the city
underwent forced busing to integrate its schools, felt like watching a
"train wreck" but walking away unscathed. He attended high school at the
prestigious Boston Latin School. With assistance from Project REACH, a
program to help Boston minorities succeed in higher education, he went to
Morehouse College. From there his path included a J.D. from Harvard Law, an
M.B.A. from Wharton, and a career as an education entrepreneur.
The 42-year-old founded two earlier companies:
LearnNow, a charter-school-management outfit that was sold to Edison
Schools, and Platform Learning, a tutoring firm that served low-income
students. So far, he's raised about $8 million from investors for
UniversityNow, whose New Charter subsidiary is a rebranded, redesigned, and
relocated version of an online institution once called Andrew Jackson
University.
Breaking a Traditional Mold
To build the software, Mr. Wade looked beyond the
traditional world of educational technology, recruiting developers from
companies like Google. Signing up for the university feels more like
creating an account with a Web platform like Facebook than the laborious
process of starting a traditional program—in fact, New Charter lets you join
with your Facebook ID. Students, whether paying or not, start each class by
taking an assessment to establish whether they're ready for the course and
what material within it they need to work on. Based on that, the system
creates a pathway to guide them through the content. They skip stuff that
they already know.
That was part of the appeal for Ruben Fragoso, who
signed up for New Charter's M.B.A. program three weeks ago after stumbling
on the university while Googling for information about online degrees. Mr.
Fragoso, 53, lives in Albuquerque and works full time as a logistics
coordinator for a solar power company. The Mexican-born father of two earned
a bachelor's degree 12 years ago from Excelsior College. With New Charter,
he mostly teaches himself, hunkering down in his home office after dinner to
read and take quizzes. By week three, he hadn't interacted with any other
students, and his instructor contact had been limited to a welcome e-mail.
That was fine by him.
He likes that he can adjust his schedule to
whatever fits—one course at a time if a subject is tough, or maybe three if
he prefers. His company's education benefits—up to $5,000 a year—cover the
whole thing. With years of business experience, he appreciates the option of
heading quickly to a final test on a subject that is familiar to him.
Continued in article
US News Rankings ---
http://www.usnews.com/rankings
US News Top Online Education Programs ---
http://www.usnews.com/education/online-education
Do not confuse this with the US News project to evaluate for-profit universities
--- a project hampered by refusal of many for-profit universities to provide
data
'Honor Roll' From 'U.S. News' of Online Graduate Programs in Business
Institution | Teaching Practices and Student Engagement | Student Services and Technology | Faculty Credentials and Training | Admissions Selectivity
Arizona State U., W.P. Carey School of Business | 24 | 32 | 37 | 11
Arkansas State U. | 9 | 21 | 1 | 36
Brandman U. (part of the Chapman U. system) | 40 | 24 | 29 | n/a
Central Michigan U. | 11 | 3 | 56 | 9
Clarkson U. | 4 | 24 | 2 | 23
Florida Institute of Technology | 43 | 16 | 23 | n/a
Gardner-Webb U. | 27 | 1 | 15 | n/a
George Washington U. | 20 | 9 | 7 | n/a
Indiana U. at Bloomington, Kelley School of Business | 29 | 19 | 40 | 3
Marist College | 67 | 23 | 6 | 5
Quinnipiac U. | 6 | 4 | 13 | 16
Temple U., Fox School of Business | 39 | 8 | 17 | 34
U. of Houston-Clear Lake | 8 | 21 | 18 | n/a
U. of Mississippi | 37 | 44 | 20 | n/a
Source: U.S. News & World Report
Jensen Comment
I don't know why the largest for-profit universities, which collectively grant
more online degrees than the above universities combined, are not included in
the final outcomes. For example, the University of Phoenix alone has over
600,000 students, most of whom are taking some or all of their courses online.
My guess is that most for-profit universities are not forthcoming with the
data requested by US News analysts. Note that the US News requirement
that eligible online programs be regionally accredited does not exclude many
for-profit universities. For example, enter
such for-profit names as "University of Phoenix" or "Capella University" in the
"College Search" box at
http://colleges.usnews.rankingsandreviews.com/best-colleges/university-of-phoenix-20988
These universities are included in the set of eligible regionally accredited
online degree programs to be evaluated. They just did not do well in the above
"Honor Roll" of outcomes for online degree programs.
For-profit universities may have shot themselves
in the foot by not providing the evaluation data to US News for online
degree program evaluation. But there may be reasons for this. For example, one
of the big failings of most for-profit online degree programs is in
undergraduate "Admissions Selectivity."
Bob Jensen's threads on distance education training and education
alternatives are at
http://faculty.trinity.edu/rjensen/Crossborder.htm
Bob Jensen's threads on ranking controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#BusinessSchoolRankings
Bob Jensen's threads on distance education ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#DistanceEducation
For-Profit Universities Operating in the Gray
Zone of Fraud ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#ForProfitFraud
Critical Thinking Badges for Brains That Do Not Have Course Content
Competency
"Online Course Provider, StraighterLine, to Offer Critical-Thinking Tests to
Students," by Jeff Selingo, Chronicle of Higher Education, January
19, 2012 ---
Click Here
http://chronicle.com/blogs/wiredcampus/online-course-provider-straighterline-to-offer-critical-thinking-tests-to-students/35092?sid=at&utm_source=at&utm_medium=en
As
alternatives to the college diploma have been
bandied about recently, one question always seems to emerge: How do you
validate badges or individual classes as a credential in the absence of a
degree?
One company that has been hailed by some as
revolutionizing introductory courses might have an answer.
The company, StraighterLine,
announced on Thursday that beginning this fall it
will offer students access to three leading critical-thinking tests,
allowing them to take their results to employers or colleges to demonstrate
their proficiency in certain academic areas.
The tests—the Collegiate Learning Assessment,
sponsored by the Council for Aid to Education, and the Proficiency Profile,
from the Educational Testing Service—each measure critical thinking and
writing, among other academic areas. The iSkills test, also from ETS,
measures the ability of a student to navigate and critically evaluate
information from digital technology.
Until now, the tests were largely used by colleges
to measure student learning, but students did not receive their scores.
That’s one reason that critics of the tests have
questioned their effectiveness since students have
little incentive to do well.
Burck Smith, the founder and chief executive of
StraighterLine, which offers online, self-paced introductory courses, said
on Thursday that students would not need to take classes with StraighterLine
in order to sit for the tests. But he hopes that, for students who do take
both classes and tests, the scores on the test will help validate
StraighterLine courses.
StraighterLine doesn’t grant degrees and so can’t
be accredited. It depends on accredited institutions to accept its credits,
which has not always been an easy task for the company.
“For students looking to get a leg up in the job
market or getting into college,” Mr. Smith said, “this will give them a way
to show they’re proficient in key academic areas.”
Jensen Comment
College diplomas might be obtained in three different scenarios:
- Traditional College Courses
Students take onsite or online courses that are graded by their instructors.
- Competency-Based College Courses
Students take onsite or online courses and are then given competency-based
examinations.
Examples include the increasingly popular Western Governors University and
Canada's Chartered Accountancy School of Business (CASB).
http://faculty.trinity.edu/rjensen/Assess.htm#ComputerBasedAssessment
- Competency-Based College Courses That Never Meet or Rarely Meet
Students might study from course materials and videos in classes that do not
meet or rarely meet with instructors.
In the 1900s the University of Chicago gave degrees to students who took
only examinations to pass courses.
In current times BYU teaches the first two accounting courses from variable
speed video disks and then administers competency-based examinations.
The University of New Hampshire is now developing a degree program for
students who take only competency-based examinations to pass courses.
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#NoInstructors
Recently, certificates of online "attendance" in courses that do not carry
college credit toward diplomas have become increasingly popular. MIT is
providing such certificates ---
"Will MITx Disrupt Higher Education?" by Robert Talbert, Chronicle of
Higher Education, December 20, 2011 ---
http://chronicle.com/blognetwork/castingoutnines/2011/12/20/will-mitx-disrupt-higher-education/?sid=wc&utm_source=wc&utm_medium=en
MITx Open Sharing Wonder
"MIT Mints a Valuable New Form of Academic Currency," by Kevin Carey,
Chronicle of Higher Education, January 22, 2012 ---
http://chronicle.com/article/MIT-Mints-a-Valuable-New-Form/130410/?sid=wc&utm_source=wc&utm_medium=en
There are no admission requirements or prerequisites to enroll in these online
courses. Presumably the only tests of competency might be written or oral
examinations of potential employers. For example, if knowledge of Bessel
Functions is required on the job, a potential employer might determine in one
way or another that the student has a competency in Bessel Functions ---
http://en.wikipedia.org/wiki/Bessel_Functions
In all the above instances, a student's transcript is based upon course
content whether or not the student takes courses and/or competency-based
examinations in the content of those courses.
StraighterLine's new certificates based upon "Critical-Thinking Tests" are an
entirely different concept. Presumably the certificates are no longer rooted
in knowledge of content. Rather, these are certificates based upon critical
thinking skills in selected basic courses such as a writing skills course.
In my opinion these will be a much harder sell in the market. Whereas a
potential employer can assess whether an applicant has the requisite skills in
something like Bessel Functions, how does an employer or college admissions
officer verify that StraighterLine's "Critical-Thinking Tests" are worth a diddly
crap and, if so, what does passing such tests mean in terms of job skills?
Thus far I'm not impressed with Critical Thinking Certificates unless they
are also rooted in course content apart from "thinking" alone.
Bob Jensen's threads on the BYU Variable Speed Video Courses ---
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#BYUvideo
Bob Jensen's threads on assessment ---
http://faculty.trinity.edu/rjensen/Assess.htm
Bob Jensen's threads on open sharing courses, lectures, videos, tutorials,
and course materials from prestigious universities ---
http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI
Bob Jensen's threads on online training and education alternatives ---
http://faculty.trinity.edu/rjensen/Crossborder.htm
Bob Jensen's threads on higher education controversies ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
"A Russian University Gets Creative Against Corruption: With
surveillance equipment and video campaigns, rector aims to eliminate bribery at
Kazan State," by Anna Nemtsova, Chronicle of Higher Education, January
17, 2010 ---
http://chronicle.com/article/A-Russian-University-Gets/63522/
A student walks down the hallway of a university
building and, in a stroke of luck, finds a 1,000-ruble bill lying on the
floor. As he bends down to grab it, an idea crosses his mind.
"That is going to be just enough to pay for my
exam!" he exclaims.
Then the figure of a man in a suit blocks the light
over the squatting student.
"No it won't!" the man says, shaking his head.
In the next moment, the student is literally kicked
out of the university, his official file flying down the stairs behind him.
This bit of melodrama is not an exam-time
nightmare, but a video by students at Kazan State University. They are part
of an unusual campaign to stamp out corruption on the campus. Too many
students and professors have a "pay to play" mentality, reformers say, in
which grades and test scores are bought and sold.
Anticorruption videos are shown daily. Students
participate in classroom discussions about the problem. Kazan State's
rector, Myakzyum Salakhov, has installed video cameras in every hallway and
classroom, so that the security department can watch students and professors
in every corner of the university to catch any bribes as they are made.
"Our job is to change the attitude to corruption at
our university, so all students and professors realize that corruption is
damaging our system of education, that corruption should be punished," says
Mr. Salakhov, who is outspoken, both on campus and off, about the challenges
that Russian higher education faces on this front.
"We are working on creating a new trend on our
campus," he says. "Soon every student giving bribes or professor making
money on students will feel ashamed."
Across Russia, bribery and influence-peddling are
rife within academe. Critics cite a combination of factors: Poor salaries
lead some professors to pocket bribes in order to make ends meet. Students
and their families feel they must pay administrators to get into good
universities, if only because everyone else seems to be doing it. And local
government officials turn a blind eye, sometimes because they, too, are
corrupt.
"Corruption has become a systemic problem, and we
therefore need a systemic response to deal with it," Russia's president,
Dmitry Medvedev, said last June.
Last fall a federal law-enforcement operation
called Education 2009 reported that law-enforcement officials had uncovered
3,117 instances of corruption in higher education; of those, 1,143 involved
bribes. That is a 90-percent increase over the previous year.
Law-enforcement agencies prosecuted 265 university employees for taking
bribes.
But while many Russians shrug their shoulders over
this news—reports on corruption in higher education are hardly new—Kazan
State decided to do something about it.
The 200-year-old institution in southwestern
Russia, which educated Leo Tolstoy and Vladimir Lenin, among others, is
considered among the best universities in Russia. It enrolls 14,000
full-time students, most of whom come from the nearby Volga River region of
the country.
Grades for Sale
Students and administrators alike
say that bribery is rampant on the campus, and that it includes everyone
from students to department chairs.
"Corruption is just a routine we have to deal
with," says Alsu Bariyeva, a student activist and journalism major who
joined the campaign after a professor in the physical-culture department
suggested that she pay him to get credit for her work that semester. She
paid.
Several students said they once saw a list of
prices posted in the hallway of the law department. The cost of a good grade
on various exams ranged from $50 to $200. Students from other departments
report similar scenarios.
Many people on the campus identify the arrest last
March of the head of the general-mathematics department as a turning point.
Police, tipped off by students and parents, charged in and arrested Maryan
Matveichuk, 61, as he was pocketing thousands of rubles from a student for a
good mark on a summer exam.
The police investigation concluded that in at least
six instances Mr. Matveichuk, a respected professor, had accepted bribes of
4,000 to 6,000 rubles, or about $135 to $200, from students in other
departments for good grades on their math exams and courses.
Last September a court in Kazan found the math
professor guilty of accepting a total of 29,500 rubles, or $1,000, in
bribes, issued a suspended sentence of three years in prison, and stripped
him of his teaching credential.
Mr. Matveichuk's arrest inspired Mr. Salakhov, the
rector, to form an anticorruption committee, including administrators and
students.
"I personally believe that corruption sits in our
mentality," Mr. Salakhov says. "With students' help, I found three
professors taking bribes and asked them to leave. The committee's job is to
crack down on corruption within these walls."
Constant Surveillance
Mr. Salakhov's right-hand man
in his fight against corruption is Gennady Sadrislamov, the deputy rector
responsible for campus security. A large computer screen on his desk
displays images from the cameras placed around the campus.
A former police colonel whose heavy figure appears
in the campus anticorruption videos, Mr. Sadrislamov says students are
crucial to the campaign's success.
"Matveichuk brought shame to our university, but
unfortunately, he was not the only one making money on the side," the deputy
rector says. "Corruption sits in everybody's head. We cannot eliminate the
idea of bribing and cheating without students' help."
With information provided by students and
professors, Mr. Sadrislamov goes to the rector to get investigations under
way. At least one professor volunteered to quit after he was confronted by
Kazan State's anticorruption council, which comprises the rector, his
deputies, the security department, and some students. The group meets
monthly to discuss the anticorruption campaign.
The security chief says it will take a while to rid
the campus of corruption, because it is so ingrained.
"I do not believe that professors commit crime
because of their low salaries," he says. "They take bribes because it has
gone unpunished. That is the real picture in every Russian university all
across the country."
Russian professors' salaries are very low. At Kazan
State, they make 20,000 to 25,000 rubles a month, or about $667 to $833.
"That is not enough to feed the family. People
break the law out of need—they have no option," says one professor at the
university, who did not want his name to be used.
Students have mixed views about the corruption
campaign. In a conversation among a group of students from the law
department, considered to be among the most corrupt, many scoffed at talk of
reform.
"Law-enforcement agencies should reform first,"
said one student, who declined to give his name but said he was the son of
an agent in the Federal Security Service, a successor agency to the KGB.
"Russia is rotten of corruption. Even the president admits that. I do not
believe somebody could put the end to it on our campus."
The reformers seem undeterred by such skepticism.
"Some say we are too naïve to believe that the old
traditions can be changed; some avoid even talking to us. But there are
students who agree the disease can be treated," says Dmitry Modestov, a
third-year student who works with classmates on developing pens, fliers, and
other materials with anticorruption slogans.
"We are trying to change the mind-set on our
campus. We say, Knowledge is worth more than bribes."
A Reform Effort Backfires
Efforts to combat
corruption on a national scale have so far failed to have much of an effect.
In 2001, Russia introduced an SAT-like test known
as the Unified State Exam. It was created in large measure to eliminate
corruption in the college-entrance process. Colleges were to rely primarily
on exam results in determining who should be admitted. Last year was the
first in which testing became obligatory nationally.
But instead of reducing corruption, the exam
apparently has fostered it. Claims arose that exam results were being
tampered with by local officials whose job it is to administer the test.
Another avenue of abuse is the so-called "discount"
for students with special needs and children of state employees.
Universities are obliged to accept lower scores on
the Unified State Exam from members of those groups, which comprise 153
categories, including handicapped students, children of Chernobyl victims,
and orphans.
The fixed price for obtaining the needed papers to
be labeled as a member of a discount group is 70,000 rubles, or $2,300, says
Deliara Yafizova, a first-year student at Kazan State.
"I entered without a bribe, but I heard that there
was a price for making life easier," she said one recent morning in the
campus cafe.
Mr. Salakhov, the rector, saw the problem firsthand
when he looked at the applicants for this year's first-year class. "All of a
sudden we had crowds of handicapped students applying to our university," he
says. "At one department I had 36 handicapped students per 30 available
seats. We tried to check every case, especially the cases where it said that
the disability expired in two to three months. Many of these disabled
children turned out to have parents working as hospital managers. Their
papers turned out fake."
Of the 1,358 full-time students admitted to Kazan
State this academic year, more than 250 were from discount categories.
"That is a tiny little opportunity for universities
to stay corrupt," says Mr. Salakhov. "If a big bureaucrat from, say, the
ministry of education sends his son with a letter of support to a rector,
the university might have to admit that son. But not at this university. We
do not let in students with just any score, no matter how high-rank their
parents are."
As for reporting scores themselves, state-exam
corruption has taken on absurd proportions, driven by regional bureaucrats'
desire to ensure that the scores of students admitted to local colleges are
better than average.
For example, students in Kabardino-Balkaria and
Ingushetia, areas of economic hardship and low-level insurgency near
Chechnya, achieved record scores last summer in the Russian-language exam.
Yet Russian is not the native language of most residents there.
In another instance, Lyubov Glebova, head of the
Federal Service for the Oversight of Education and Science, flew to
Voronezh, in the southern part of the country, as soon as she found out that
students' scores in the city were the highest on most of the seven parts of
the national exam.
"You are the country's leaders on Unified State
Exam results," she announced at the regional meeting of school and
higher-education authorities in Voronezh. Unaware that she was about to
accuse them of tampering with test scores, the crowd of local bureaucrats
applauded her statement.
Ms. Glebova fired the head of the regional
education authority, and several exam organizers will not be allowed to
continue in those roles this year.
Russia still lives with the Soviet mentality of
keeping information secret and presenting fake pictures of life, says
Yevgeny Yasin, director of research at the State University Higher School of
Economics, in Moscow. Even so, in a country where people tend to follow the
signals given by authorities, he is hopeful.
"It will take a little longer," he says, "but the
time of transparency will eventually come to the Russian education system,
as it did to many Western countries."
Continued in article
Jensen Comment
A more reliable and probably much cheaper alternative would instead be to adopt
competency-based grading and degree awarding. Two North American universities
using competency-based courses are the accredited online undergraduate Western
Governors University (WGU) and
the Canadian master's degree program at the Chartered Accountants School of Business (CASB).
Both programs have a reputation for integrity and toughness.
Competency-Based Learning ---
http://en.wikipedia.org/wiki/Western_Governors_University#Competency-Based_Learning
Educational
Competency Assessment (ECA) Web Site --- http://www.aicpa-eca.org/
The AICPA recently won a National Association of Colleges and Employers (NACE)
Excellence Award for Educational Programming for developing this ECA site to
help accounting educators integrate the skill-based competencies needed by
entry-level accounting professionals.
The AICPA provides this resource to help educators
integrate the skills-based competencies needed by entry-level accounting
professionals. These competencies, defined within the AICPA Core Competency
Framework Project, have been derived from academic and professional competency
models and have been widely endorsed within the academic community. Created by
educators for educators, the evaluation and educational strategies resources
on this site are offered for your use and adaptation.
The ECA site contains a LIBRARY that, in addition to
the Core Competency Database and Education Strategies, provides information
and guidance on Evaluating Competency Coverage and Assessing Student
Performance.
To assist you as you assess student performance and
evaluate competency coverage in your courses and programs, the ECA ORGANIZERS
guide you through the process of gathering, compiling and analyzing evidence
and data so that you may document your activities and progress in addressing
the AICPA Core Competencies.
Some years back the Texas State Board of Public Accountancy (TSBPA) declared
war on distance education by requiring a minimum of five semester courses (15
credits) of accounting onsite instead of online ---
http://www.cs.trinity.edu/~rjensen/temp/TexasBigBrother.htm
Large universities in Texas such as the University of Texas and Texas A&M
have extensive online degree programs in such areas as science and engineering,
but not in accountancy, where very large and highly rated onsite accounting
degree programs have shown virtually no interest in reaching out to students who
are unable to attend classes on campus. In fact, I've suspected for a long time
that these major universities have pressured the TSBPA to discourage distance
education.
Western Governors University ---
http://en.wikipedia.org/wiki/Western_Governors_University
WGU is a competency-based online university where course instructors do not
assign grades. Instead the grading is competency based much like professional
certification examinations such as the CPA Examination and medical board
examinations ---
http://faculty.trinity.edu/rjensen/assess.htm#ComputerBasedAssessment
"WGU Lassoes Texas," by Steve Kolowich, Inside Higher Ed,
August 4, 2011 ---
http://www.insidehighered.com/news/2011/08/04/governor_perry_partners_with_western_governors_university
Western Governors
University continued to live up to its name on
Wednesday, as Texas Governor Rick Perry
announced a partnership with the fast-growing
online institution — and was promptly showered with praise from nearly
everyone.
Western Governors, a regionally accredited,
nonprofit university founded in 1997 by 18 politicians who held that office
at that time, represents an
alternative model of higher education that has garnered both praise and
skepticism.
Aimed at working adults (the average student is
36), Western Governors confers bachelor's and master’s degrees based on a
student’s ability to demonstrate skills. There are no classrooms and no
professors. Students learn online and mostly on their own, with light
guidance from their advisers. They take proctored tests at local testing
centers whenever they feel they are ready. Students pay tuition — between
$2,890 and $4,250, depending on the program — every six months until they
graduate, which 40 percent of them do within four years. (First-time,
full-time students are considerably less successful, graduating at a 22
percent rate.)
The partnership with Texas will create a
state-branded version of Western Governors called WGU-Texas. Texas is the
third state to create a local version of Western Governors, which is based
in Salt Lake City, Utah; Indiana Governor Mitch Daniels
created WGU-Indiana last summer, and the
Washington State legislature
voted WGU-Washington
into existence earlier this year.
Like Indiana and Washington, Texas will not
allocate any money out of its state budget to Western Governors, which
supports itself based on tuition. However, a Western Governors spokeswoman
says the university is currently working with Texas officials to allow Texas
residents to spend in-state financial aid grants on the Utah-based
institution.
Amid deep cuts to public higher education budgets,
Governor Perry earlier this year
challenged state institutions to come up with some
way to offer a four-year degree program for the total price of $10,000.
Alas, WGU-Texas is not the answer to that challenge, said Catherine Frazier,
a Perry spokeswoman. The average Western Governors graduate earns a degree
in 30 months, or five pay periods; including fees, that means $14,735 for
the least expensive degrees (information technology and business), and
$21,890 for the most expensive (nursing pre-licensure).
“But, certainly, having this affordable option does
prove that a degree can be offered by an institution at an affordable
price,” Frazier said.
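As a quick sanity check on the figures quoted above, the per-term billing can be sketched in Python. The six-month term length, 30-month average time to degree, and tuition range come from the article; the gap between these tuition-only totals and the quoted $14,735 and $21,890 is the fees the article mentions but does not itemize.

```python
# Rough cost sketch for a Western Governors degree, using the article's
# figures: tuition is billed per six-month term, and the average graduate
# finishes in 30 months (five terms).

MONTHS_TO_GRADUATE = 30
TERM_LENGTH_MONTHS = 6

def total_tuition(per_term: float, months: int = MONTHS_TO_GRADUATE) -> float:
    """Total tuition for a degree finished in `months`, ignoring fees."""
    terms = months / TERM_LENGTH_MONTHS
    return terms * per_term

cheapest = total_tuition(2890)   # information technology and business
priciest = total_tuition(4250)   # nursing pre-licensure

print(cheapest)  # 14450.0 -- the article's $14,735 total adds fees
print(priciest)  # 21250.0 -- vs. the article's $21,890 with fees
```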
In its effort to expand into various states,
Western Governors has faced criticism from some educators, particularly in
Washington state. “[B]rain research demonstrates that real learning requires
students to struggle with difficult material under the consistent guidance
of good teachers,” wrote Johann Neem, an associate professor of history at
Western Washington University, in an April
op-ed for The Seattle Times. “WGU denies
students these opportunities. In fact, its advertisements pander to
prospective students by offering them credit for what they already know
rather than promising to teach them something new.”
But advocates say the Western Governors model has
its place in the constellation of state higher education systems. For adult
students who possess the knowledge and skills to bypass a chunk of the
curriculum — either because they have some prior college or because they
have picked it up in their working lives — the competency-based model is a
good way to avoid the tedium and expense of sitting through redundant
classes, the Center for Adult and Experiential Learning has said.
“The idea is that these adult learners will bring
certain skills and knowledge to the table and that they [will] be able to
use them to accelerate progress toward an academic degree and advance in the
workforce,” said Dominic Chavez, a spokesman for the Texas Higher Education
Coordinating Board, in an e-mail. “While students will typically be able to
gain course credit for having specific knowledge in certain areas, students
reach a point at which they acquire new knowledge and skills beyond their
existing levels,” Chavez said. “These are the skills that take them to the
next level and that offer increased workforce opportunities.”
The WGU-Texas announcement met with glowing praise
elsewhere. The partnership “will help address our state's key workforce
needs while offering affordable career and continuing education
opportunities to Texans over 30," said State Senator Judith Zaffirini, a
Democrat who chairs the state senate’s higher education committee, in a
statement.
“This low-cost alternative will expand access to
more Texans, engaging our diverse student population and upholding our
statewide commitment to help more students reach their academic and lifelong
goals,” wrote the Texas Coalition for Excellence in Higher Education, a
group of former administrative heavyweights from the Texas higher ed system
who have challenged much of Governor Perry's higher education agenda.
Rey Garcia, president of the Texas Association of
Community Colleges, said his organization was planning a statewide
articulation agreement with WGU-Texas that would make it easy for students
to finish their bachelor’s degrees at Western Governors after two years at
community college. “The traditional universities don’t make it terribly easy
for students with an applied science degree [at a community college] to
transfer into a baccalaureate,” Garcia said in an interview. “WGU is a lot
more flexible in that regard.”
Garcia added that he is not worried students will
skip the community colleges altogether and opt for all four years at WGU-Texas
because “they’re considerably more expensive than we are.”
But Mary Aldridge Dean, executive director of the
Texas Faculty Association, said prospective students — especially younger
ones — should consider more than just the price tag when considering
enrolling at WGU-Texas.
Continued in article
Question
Why can't the highest-scoring CPA Exam taker in the nation become
a licensed CPA in Texas?
Answer
Because in Texas, unlike the other 49 states, nobody can become a CPA without
having taken at least five accounting courses onsite. Distance education
graduates need not apply for a CPA certificate if they have distance education
degrees and/or did not take about half of the required accounting, auditing, and
tax courses onsite instead of online.
In effect this means that Texas does not allow fully online accounting
degrees, so that even flagship universities like Texas and Texas A&M,
unlike flagship universities in Connecticut, Wisconsin, and Maryland, have no distance
education accounting degrees.
March 31, 2011 message from Barbara Scofield
In the state of Texas educators are struggling with
ever more onerous rules for candidacy. The AICPA, however, seems to be
ignoring issues that loom large for the TSBPA. One of their newly featured
"new CPAs" at the link below is an award winner from Colorado (not a 150
hour state) who took her accounting courses online (Texas requires 15 credit
hours of face-to-face upper-division accounting courses) from DeVry.
http://www.thiswaytocpa.com/exam-licensure/exam-diary/leslie-rezgui/
Could this person work as a CPA in Texas?
Barbara W. Scofield, PhD, CPA
Chair of Graduate Business Studies
Professor of Accounting
The University of Texas of the Permian Basin
4901 E. University Dr. Odessa, TX 79762
432-552-2183 (Office)
November 5, 2010 reply from Bruce Lubich <BLubich@umuc.edu>
Note that Bruce is the Director of an online accounting distance education
program in the University of Maryland System
Hi Bob,
When TX first went to the 15 credit requirement, we
had a couple of University of Maryland University College students apply
for the exam there, and be rejected. Our transcript doesn't show which
courses were taken online. Apparently it's on the TX paperwork. Lying on
that is not something to be encouraged for future CPAs. So, unless a
student has no desire to sit for the CPA exam or they just need to fill in
a few holes to qualify, the TX market has dried up for all online programs.
Evidently, the TX board takes this requirement
very seriously, so my guess is that your Deloitte hire would be denied the
ability to sit. Seems to me Deloitte would need to send the student to a
different office until they pass the exam. As for reciprocity, I haven't
heard of any problems. That doesn't mean they're not out there, but I
haven't heard of them. Bottom line is TX has protected their investment in
their brick & mortar schools. At one time LA and New Mexico had similar,
though weaker rules like this. I believe both have woken up and done away
with those rules.
Bruce Lubich
University of Maryland University College
November 6, 2010 reply from Bob Jensen
Hi Bruce,
Thanks for this.
What you are saying is that the Texas Board may be cooperating with Texas
Universities to reserve all entry-level accounting jobs in Texas for only
graduates of Texas universities. Graduates from your program in the
University of Maryland system can, thereby, not compete for jobs in Texas
CPA firms. .
Out-of-state graduates need not apply. Seems like a great idea for the
other 49 states so that graduates of a given state have a monopoly on jobs
within the state. Of course the national and international CPA firms might
object to complications this creates in hiring. And students who want to
leave a state might object to not having jobs available anywhere other than
the state where they graduated.
Why didn't the European Union think of this as a clever way of
restricting labor flows between borders?
Bob Jensen
My threads (rough draft notes) on this antiquated and absurd ruling by the
TSBPA (read that Big Brother) can be found at
http://www.cs.trinity.edu/~rjensen/temp/TexasBigBrother.htm
Online Education Effectiveness and Testing
Respondus Testing Software
October 13, 2009 message from Richard Campbell
[campbell@RIO.EDU]
For anyone teaching online, this software is
a "must-have". They have released a new (4.0) version with improved
integration of multimedia. Below are some videos (created in Camtasia) that
demonstrate key features of the software.
http://www.respondus.com/
They have tightened up the integration with
publisher test banks.
Richard J. Campbell
mailto:campbell@rio.edu
May 20, 2010 message from Richard Campbell
[campbell@RIO.EDU]
Respondus is a very powerful test generator and most
publishers provide test banks in that format.
http://www.screencast.com/t/NTdlNzAw
Richard J. Campbell
School of Business
218 N. College Ave.
University of Rio Grande
Rio Grande, OH 45674
Voice:740-245-7288
http://faculty.rio.edu/campbell
Bob Jensen's threads on tools and tricks of the trade ---
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm
Learning Effectiveness in Corporate Universities
A group of colleges that serve adult students on Monday
formally announced their effort
to measure and report their effectiveness, focusing on outcomes in specific
programs. The initiative, known as “Transparency by Design,”
on which Inside Higher Ed reported earlier,
has grown to include a mix of 10 nonprofit and for-profit institutions: Capella
University, Charter Oak State College, Excelsior College, Fielding Graduate
University, Franklin University, Kaplan University, Regis University, Rio Salado
College, Western Governors University, and Union Institute & University.
Inside Higher Ed, October 23, 2007 ---
http://www.insidehighered.com/news/2007/10/23/qt
"Cheating in Online Courses," Dan Ariely, August 2012 ---
http://danariely.com/2012/08/10/cheating-in-online-courses/
Jensen Comment
If there is more cheating in online courses, the fault lies with the internal
controls of the online system rather than the difference between online versus
onsite systems per se. Cheating is largely the fault of the online and
onsite instructors and their universities. There are controls (not costless) to
reduce online cheating to levels below those of onsite courses ---
http://faculty.trinity.edu/rjensen/Assess.htm#OnlineOffCampus
For example, with proper Webcam procedures a student taking an online test can
be under closer one-on-one observation, or the examination can be proctored
by the village vicar or a Sylvan Systems center.
Another approach is to outsource proctoring to local K-12 teachers.
Respondus Monitor - online exams proctor ---
http://youtu.be/lGyc_HBchOw
One of the selling points of for-profit universities is that they are more
open to non-traditional students vis-à-vis nonprofit traditional colleges and
universities. This is thus a "diversity" selling point for for-profit
universities.
However, one of the drawbacks is that when traditional colleges and
universities attempt to be more open to diversity and admission of
non-traditional students, there are huge problems of enforcing academic
standards and serious possibilities that most of the non-traditional students
will not graduate.
Here's how some for-profit universities deal unethically with assessment
issues. It's a small wonder that for-profit universities are very popular with
non-traditional students.
"Undercover Probe Finds Lax Academic Standards at Some For-Profit Colleges,"
by Kelly Field, Chronicle of Higher Education, November 22, 2011 ---
http://chronicle.com/article/Undercover-Probe-Finds-Lax/129881/?sid=wc&utm_source=wc&utm_medium=en
An undercover investigation by the Government
Accountability Office has found evidence of lax academic standards in some
online for-profit programs.
The probe, which is described in a
report
made public Tuesday, found that staff at six of the 12
colleges that enrolled the investigators tolerated plagiarism or awarded
credit for incomplete or shoddy work.
The release of the report, "For-Profit Schools:
Experiences of Undercover Students Enrolled in Online Classes at Selected
Colleges," comes roughly a year after the accountability office
revised an earlier report on recruiting abuses at
for-profit colleges, acknowledging errors and omissions in its findings. A
coalition of for-profit colleges has
sued the office over that report, accusing its
investigators of professional malpractice.
In that earlier investigation, the office sent
undercover investigators to 15 for-profit colleges to pose as prospective
students. It
found widespread deception in recruiting by the
colleges, with many employees providing students with false or misleading
information about graduation rates, job prospects, or earning potential.
This time, the agents attempted to enroll in online
programs at 15 for-profit colleges using a home-school diploma or a diploma
from a closed high school. Twelve of the colleges accepted them.
The "students" then proceeded to skip class,
plagiarize, and submit "substandard" work. Though several ultimately failed
their classes, some got credit for shoddy or plagiarized work along the way.
At one college, a student received credit for six
plagiarized assignments; at another, a student submitted photos of political
figures and celebrities in lieu of an essay, but still earned a passing
grade. A third student got full credit on a final project, despite
completing only two of the three required components. That same student
received full credit for an assignment that had clearly been prepared for
another class.
In two cases, instructors confronted students about
their repeated plagiarism but took no disciplinary action against them. One
student received credit for a response that was copied verbatim from other
students' discussion posts.
Instructors at the other six colleges followed
their institutions' policies on grading and plagiarism, and in some cases
offered to help students who appeared to be struggling.
All of the students ultimately withdrew or were
expelled from the programs. Three of the colleges failed to provide the
departing students with federally required exit counseling about their
repayment options and the consequences of default.
Sen. Tom Harkin, Democrat of Iowa, who requested
the report, said its findings "underscore the need for stronger oversight of
the for-profit education industry."
"It is obvious that Congress must step in to hold
this heavily federally subsidized industry more accountable," he said.
Continued in article
Jensen Comment
This makes me wish that similar investigations (audits?) would be expanded to huge
samples of nonprofit colleges and universities where
grade inflation is also rampant.
Most universities now have financial internal auditors and are subjected to
governmental or independent CPA audits. But few have independent audits of the
variability in academic standards between departments and between individual
faculty members.
Bob Jensen's threads on For-Profit Universities Operating in the Gray Zone
of Fraud ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#ForProfitFraud
November 28, 2011 reply from David Albrecht
Bob, I agree with your comment that the study could
have been expanded. As it is, the study is hardly scientific. The sample
size is small, and we have no idea whether lax standards, instructor
negligence, or instructor mercy are responsible for the actions. In
traditional schools, whether they be state funded or private, I wonder if
more abuses would be found among tenure-track or non-tenure-track profs.
Dave Albrecht
November 28, 2011 reply from Bob Jensen
Hi David,
In my opinion, grade inflation and lax academic standards may be more of a
problem for tenured professors than probationary (non-tenured) professors on
tenure track and maybe even for adjunct professors (but adjuncts are so
variable it's hard to draw generalizations).
I will provide an example of non-tenured faculty who are on tenure tracks at
Trinity University. Such probationary faculty are under severe scrutiny by
their immediate departmental faculty and upper-level university committees.
There's heavy pressure on all faculty involved to warn probationary faculty
about inadequate versus adequate progress toward tenure. The hope is that
all nontenured faculty not making adequate progress by year six will have
been terminated, so that all faculty who do go up for tenure have a high
probability of being approved.
Included in what Trinity calls "probationary reviews" as well as final
"tenure applications" are teaching evaluations, grading distributions for
each course, copies of examinations in each course, copies of course
syllabi, and
self-review statements of candidates. There are also external (off-campus)
reviews in tenure applications, but these are mostly focused on research and
publication.
Tenured faculty are not subjected to such rigorous reviews, and hence a few
tenured faculty in my viewpoint become more lax about academic standards.
Hopefully these are just outliers. There is a rigorous review of associate
professors at times when they apply for full professorships. These are much
like tenure applications and require a truckload of teaching evaluations,
grading distributions for each course, copies of examinations in each
course, copies of course
syllabi, and
self-review statements of candidates. There are also external (off-campus)
reviews in full-professorship applications, but these are mostly focused on
research and publication.
In my 24 years at Trinity University I was completely surprised by the
proportion of hired tenure-track faculty who were terminated before even
reaching the tenure application stage. I was even more surprised by
some of the tenure applicants and full-professor applicants who were
rejected by the P&T Committee and/or the President of the University.
I was also surprised in some years by some of the long-term tenured
faculty (some of whom were lifetime associate professors) who had their
tenure contracts bought out by deals made with the President of the
University. In some cases those buyouts were either for lackluster teaching
and/or lackluster academic standards.
Of course there were also a few faculty members who had some other
dysfunctional behavior leading to buyouts. One of my friends had an early
onset of dementia and was somewhat of a problem even after termination (on a
generous early retirement package), because he continued to hang around
computer labs and the campus library and showed off his vanity press
"research" book that was garbage to the point of embarrassment. He claimed
that proper exercise could prevent all forms of cancer.
Some campus officials and faculty, including me, breathed a sigh of relief
when he eventually died and stopped giving his vanity press book away for
free around Texas.
Of course there are also those who will breathe a sigh of relief when one of
their retired faculty members stops sending so many messages to the AECM.
Respectfully,
Bob Jensen
"The Chronicle's special report on Online Learning explores how calls for
quality control and assessment are reshaping online learning," (Not Free),
Chronicle of Higher Education, November 2011 ---
https://www.chronicle-store.com/Store/ProductDetails.aspx?CO=CQ&ID=78602&cid=ol_nlb_wc
The Chronicle's special report on Online Learning explores how calls for
quality control and assessment are reshaping online learning.
As online learning spreads throughout higher
education, so have calls for quality control and assessment. Accrediting
groups are scrambling to keep up, and Congress and government officials
continue to scrutinize the high student-loan default rates and aggressive
recruiting tactics of some for-profit, mostly online colleges. But the push
for accountability isn't coming just from outside. More colleges are looking
inward, conducting their own self-examinations into what works and what
doesn't.
Also in this year's report:
- Strategies for teaching and doing research
online
- Members of the U.S. military are taking online
courses while serving in Afghanistan
- Community colleges are using online technology
to keep an eye on at-risk students and help them understand their own
learning style
- The push to determine what students learn
online, not just how much time they spend in class
- Presidents' views on e-learning
Bob Jensen's threads on asynchronous learning ---
http://faculty.trinity.edu/rjensen/255wp.htm
Bob Jensen's threads on online course and degree programs ---
http://faculty.trinity.edu/rjensen/Crossborder.htm
"Keeping an Eye on Online Students," by Andrea L. Foster, Chronicle of
Higher Education, July 21, 2008 ---
http://chronicle.com/wiredcampus/index.php?id=3181&utm_source=wc&utm_medium=en
Technology vendors are eager to sell college
officials hardware and software designed to verify the identity of online
students—and thereby prevent cheating. A free article in The Chronicle
describes some of the technologies that colleges are trying out to make
certain that the person taking an online exam is, in fact, the student
enrolled in the course. The technologies include Web cameras that watch
students taking tests and scanners that capture students’ fingerprints.
A provision in a bill reauthorizing the Higher
Education Act is fueling much of the interest in this issue. A paper
released in February by the Western Interstate Commission for Higher
Education says the provision—while not onerous to most distance-learning
providers—could “drive up the cost of these important education programs.”
And some online institutions fear that the
provision would require them to have their students travel to distant
locations to take proctored exams on paper. The result? Some states would
conclude that the institutions have a “physical presence” in their states,
and would subject the institutions to “a whole new set of state
regulations,” says John F. Ebersole, president of Excelsior College.
Question
What are some of the features of UserVue from TechSmith for evaluating student
learning?
Some of the reviews of the revised “free” Sound Recorder in Windows Vista are
negative. It’s good to learn that Richard Campbell is having a good experience
with it when recording audio and when translating the audio into text files ---
http://microsoft.blognewschannel.com/archives/2006/05/24/windows-vista-sound-recorder
For those of you on older systems as well as Vista there is a free recorder
called Audacity that I like ---
http://audacity.sourceforge.net/
I really like Audacity. There are some Wiki tutorials at
http://audacity.sourceforge.net/help/tutorials
Some video tutorials are linked at
http://youtube.com/results?search_query=audacity+tutorial&search=Search
I have some dated threads on speech recognition at
http://faculty.trinity.edu/rjensen/speech.htm
Mac users can find options at http://www.macspeech.com/
In addition, I like Camtasia (recording screen shots and camera video) and
Dubit (for recording audio and editing audio) from TechSmith ---
http://www.techsmith.com/
TechSmith products are very good, but they are not free downloads.
UserView ---
http://www.techsmith.com/uservue/features.asp
TechSmith has a newer product called UserVue that really sounds exciting,
although I’ve not yet tried it. It allows you to view and record what is
happening on someone else’s computer, such as a student’s computer. Multiple
computers can be viewed at the same time. Images and text can be recorded.
Pop-up comments can be inserted by the instructor into text written by students.
UserVue can be used for remote testing!
UserVue offers great hope for teaching students with disabilities, such as
those with sight and/or hearing impairments ---
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#Handicapped
"Ways to prevent cheating on online exams," by Gail E. Krovitz,
eCollege Newsletter, Vol 8, Issue 6 November 15, 2007 ---
http://www.ecollege.com/Educators_Voice.learn
- Write
every exam as if it is open book. As much as we try to
convince ourselves otherwise, we need to assume that students
use resources on their exams (the book, Internet search engines
and so on) and write our exams accordingly. Are all of our
questions asking for information that can be gathered quickly
from the textbook or from a simple Internet search? Then we
should re-think our questions (see following guideline).
Open-book exams have the potential to test higher level thinking
skills, instead of just memorizing facts. Unfortunately, scores
on open-book exams are often lower, as students don’t take exam
preparation as seriously when they know they can use their book,
so training in open-book exam-taking skills would be helpful
(Rakes).
- Write
effective multiple-choice exam questions. Because it is
so easy to use prohibited materials during online exams, it is
foolish to design tests that simply test factual information
that is easily looked up. Although it is difficult to do, online
exams are most effective when they test higher order thinking
skills (application, synthesis and evaluation) and ask questions
that cannot be answered by glancing at the book or a quick
internet search. See Christe, Dewey and Rohrer for more
information about developing quality multiple-choice questions.
- Set
tight time limits per question. Even with open book
exams (and especially for ones that are not open book), it is
important to give a tight time frame for the test, so students
will not have time to look up each question in the book. The
time limit chosen will obviously vary depending on subject
matter, type of questions asked, etc. For strict fact recall,
instructors might start by giving a total time based on allowing
60-90 seconds per question and then adjusting as necessary
based on their student body. More time would need to be given
for higher-level thinking questions or for those involving
calculations.
- Use
large question pools to offer different, randomly-selected
questions to each student. See “Tip: getting the most
out of exam question pools” for a good description of using
question pools in the eCollege system. The question pools must
be large enough to minimize overlap of questions between tests.
Rowe provides a chart comparing the average number of questions
in common for two students with different question pool sizes
and different numbers of questions drawn from the pool. For
example, 5 questions drawn from a pool of 10 questions results
in 2.5 questions in common between two students, while 5
questions drawn from a pool of 25 questions results in only 1
question in common between two students. You can consult the
mathematical formula or go with common sense: a larger question
pool is better for reducing the likelihood that students will
get the same questions.
-
Manually create different versions of the exam with the same
general question pools, but with scrambled answers for each
question. For example, in one version of the exam, the
correct answer could be B, while the answer choices are
scrambled in the other version so the correct answer is D. You
could use the Group function to assign half of the class to one
exam, and the other half of the class to the other one. Cizek cites
research showing that scrambling questions and answer choices
does reduce cheating, while simply changing the order of the
same questions does not reduce cheating. In fact, in a study of
students’ perceived effectiveness of cheating prevention
strategies, having scrambled test forms was the number one
factor perceived by students to prevent cheating (Cizek).
- Assign
a greater number of smaller tests instead of one or two large
ones. This reduces the incentive to cheat, as each test
isn’t as likely to make or break a student’s grade; the pressure
of the midterm and final-only structure in some classes is a
strong incentive to cheat on those exams. Also, this increases
the logistical difficulties of cheating if a student is relying
on someone else to help them or to take the test for them.
- Provide
a clear policy for what happens if students cheat… and enforce
it! There are many important things instructors can do
from this perspective, such as discussing what constitutes
cheating, the importance of academic honesty, any honor codes in
place, what measures will be in place to prevent and detect
cheating and the punishments for cheating. If students perceive
that the instructor does not care about cheating, then incidents
of both spontaneous and planned cheating increase (Cizek).
Students know that most cheaters don’t get caught and that
punishments aren’t harsh for those who do get caught (Kleiner
and Lord). Research has found that punishment for cheating is
one of the main deterrents to cheating (Kleiner and Lord).
- Set the
exam Gradebook Review Date for after the exam has closed.
The Gradebook Review Date is when the students can access their
graded exam in the Gradebook. If this date is set before the end
of the exam, students who take the exam early could access their
exam in the Gradebook (and usually the correct answers as well)
and distribute the questions to students who would take the exam
later.
- Revise
tests every term. Sooner or later exam questions are
likely to get out into the student world and get distributed
between students. This is especially possible when students view
their graded exams in the Gradebook, as they have all the time
in the world to copy or print their questions (usually with the
correct answers provided). Periodic changes to the test bank can
help minimize the impact of this. Minor changes such as
rewording the questions and changing the order of answers
(especially if different versions with scrambled answers are not
used) can help extend the useful life of a test bank.
- Use
ExamGuard™ if the feature is available at
your school. ExamGuard prohibits the following actions while
students are taking online exams: printing, copying and pasting
anything into or from the assessment, surfing the Web, opening
or using other applications, using Windows system keys functions
or clicking on any other area within the course. Also note that
ExamGuard prohibits students from printing or copying exam
materials while viewing the exam in the Gradebook. If you are
interested in learning more about ExamGuard, please contact your
Account Executive or Client Services Consultant.
- Give
proctored exams in a traditional classroom. While this
is not an option for many online courses, it is a route that
some schools take, especially if they largely serve a local
population. With proctored exams, instructors feel more in
control of the testing environment and more able to combat
cheating in a familiar classroom setting (or at least to have
cheating levels on par with those seen in a traditional exam
setting). In a study on cheating in math or fact-based courses,
Trenholm concludes that proctoring is “the single greatest tool
we presently have to uphold the integrity of the educational
process in instruction in online MFB (math or fact based)
courses” (p. 297). Also, Cizek showed that attentive proctoring
reduced cheating directly and by giving the impression that
academic integrity is valued.
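The question-pool figures cited from Rowe above follow from a simple expectation: if each of two students independently draws k questions from a pool of N, each of the first student's questions appears in the second student's draw with probability k/N, so the expected overlap is k²/N. A minimal sketch in Python (illustrative only):

```python
def expected_overlap(pool_size: int, questions_drawn: int) -> float:
    """Expected number of questions two students share when each
    independently draws `questions_drawn` questions without
    replacement from a pool of `pool_size` questions (k * k / N)."""
    return questions_drawn ** 2 / pool_size

# Values matching the Rowe chart cited above:
print(expected_overlap(10, 5))   # 2.5 questions in common
print(expected_overlap(25, 5))   # 1.0 question in common
```

As the formula makes plain, doubling the pool size halves the expected overlap, which is the "common sense" conclusion stated above.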
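The scrambled-versions tip above can be sketched in code. This is a hypothetical illustration, not any testing platform's actual API: the key point is that when answer choices are shuffled, the position of the correct answer must be tracked so the answer key follows the shuffle.

```python
import random

def scramble_choices(choices, correct_index, seed):
    """Shuffle a question's answer choices deterministically for one
    exam version; return (shuffled_choices, new_correct_index)."""
    rng = random.Random(seed)             # one seed per exam version
    order = list(range(len(choices)))
    rng.shuffle(order)
    shuffled = [choices[i] for i in order]
    return shuffled, order.index(correct_index)

# Two versions of the same question with independently scrambled answers:
choices = ["Revenue", "Assets", "Liabilities", "Equity"]
for version_seed in (1, 2):
    shuffled, correct = scramble_choices(choices, correct_index=1,
                                         seed=version_seed)
    assert shuffled[correct] == "Assets"  # the key tracks the shuffle
```

Seeding per version keeps each exam form reproducible, so the same version can be regenerated for grading or dispute resolution.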
December 1, 2007 reply from Charles Wankel
[wankelc@VERIZON.NET]
Thanks Bob for sharing.
Some of the points seem to fall back to
face-to-face course ideas but others were very helpful. I found the emphasis
on higher order thinking skills (application, synthesis and evaluation) to
be a great one. I am going to try to work on putting synthesis into my
students’ assignments and projects.
Charlie Wankel
St. John’s University,
New York
December 1, 2007 reply from David Raggay
[draggay@TSTT.NET.TT]
Please be so kind as to refer me to the specific
article or articles wherein I can find a discussion on “higher order
thinking skills (application, synthesis and evaluation)”
Thanks,
David Raggay,
IFRS Consultants,
Trinidad and Tobago
December 1, 2007 reply from Bob Jensen
Hi David,
There are several tacks to take on this question. Charlie provides some
key words (see above).
I prefer to think of higher order metacognition ---
http://en.wikipedia.org/wiki/Metacognition
For specific examples in accounting education see
http://faculty.trinity.edu/rjensen/265wp.htm
One of the main ideas is to make students do their own discovery learning.
Blood, sweat, and tears are the best teachers.
Much of the focus in metacognitive learning is how to examine/discover
what students have learned on their own and how to control cheating when
assessing discovery and concept learning ---
http://faculty.trinity.edu/rjensen/assess.htm
Higher order learning attempts to make students think more conceptually.
In particular, note the following quotation from Bob Kennelly at
http://faculty.trinity.edu/rjensen/assess.htm#ConceptKnowledge
We studied whether instructional material that connects accounting
concept discussions with sample case applications through hypertext links
would enable students to better understand how concepts are to be applied to
practical case situations.
Results from a laboratory experiment indicated that students who learned
from such hypertext-enriched instructional material were better able to
apply concepts to new accounting cases than those who learned from
instructional material that contained identical content but lacked the
concept-case application hyperlinks.
Results also indicated that the learning benefits of concept-case
application hyperlinks in instructional material were greater when the
hyperlinks were self-generated by the students rather than inherited from
instructors, but only when students had generated appropriate links.
Along broader lines we might think of it in terms of self-organizing of
atomic-level knowledge ---
http://en.wikipedia.org/wiki/Self-organization
Issues are still in great dispute regarding the more than 80 suggested
“learning styles” ---
http://en.wikipedia.org/wiki/Learning_styles
Assessment and control of
cheating are still huge problems.
Bob Jensen
December 2, 2007 reply from Henry Collier
[henrycollier@aapt.net.au]
G’day Bob …
I’m not sure whether David is asking for the Bloom citation or not. I do not
disagree with your post in any way, but wonder if David is looking for the
‘start’ of the art/science. I have also suggested that he may want to look
at Bob Gagne’s approach to the same issues. Perhaps William Graves Perry’s
1970 book could / would also be useful.
Best
regards from spring time in New South Wales where the roses in my garden are
blooming and very pretty.
Henry
New Technology for Proctoring Distance Education Examinations
"Proctor 2.0," by Elia Powers, Inside Higher Ed, June 2, 2006 ---
http://www.insidehighered.com/news/2006/06/02/proctor
Bob Jensen's threads on online versus onsite assessment are at
http://faculty.trinity.edu/rjensen/assess.htm#OnsiteVersusOnline
Bob Jensen's threads on cheating are at
http://faculty.trinity.edu/rjensen/Plagiarism.htm
"Far From Honorable," by Steve Kolowich, Inside Higher Ed,
October 25, 2011 ---
http://www.insidehighered.com/news/2011/10/25/online-students-might-feel-less-accountable-honor-codes
Much of the urgency around creating a “sense of
community” in online courses springs from a desire to keep online students
from dropping out. But a recent paper suggests that strengthening a sense of
social belonging among online students might help universities fight another
problem: cheating.
In a series of experiments, researchers at Ohio
University found that students in fully online psychology courses who signed
an honor code promising not to cheat broke that pledge at a significantly
higher rate than did students in a “blended” course that took place
primarily in a classroom.
“The more distant students are, the more
disconnected they feel, and the more likely it is that they’ll rationalize
cheating,” Frank M. LoSchiavo, one of the authors, conjectured in an
interview with Inside Higher Ed.
While acknowledging the limitations inherent to a
study with such a narrow sample, and the fact that motivations are
particularly hard to pin down when it comes to cheating, LoSchiavo and Mark
A. Shatz, both psychology professors at Ohio University's Zanesville campus,
said their findings may indicate that meeting face-to-face with peers and
professors confers a stronger sense of accountability among students. “Honor
codes,” LoSchiavo said, “are more effective when there are [strong] social
connections.”
Honor codes are not, of course, the only method of
deterring cheating in online courses. The proliferation of online programs
has given rise to a
cottage industry of
remote proctoring technology, including one product that takes periodic
fingerprint readings while monitoring a student’s test-taking environment
with a 360-degree camera. (A 2010 survey by the Campus Computing Project
suggests that a minority of institutions authenticate the identities of
online students as a rule.)
But LoSchiavo said that he and Shatz were more
interested in finding out whether honor codes held any sway online. If so,
then online instructors might add pledges to their arsenal of anti-cheating
tools, LoSchiavo said. If not, the finding still makes an intriguing contribution to
the discussion about student engagement and “perceived social distance” in
the online environment.
They experimented with the effectiveness of honor
codes in three introductory psychology courses at Ohio University. The first
course had 40 students and was completely online. These students, like those
in subsequent trials, were a mix of traditional-age and adult students,
mostly from regional campuses in the Ohio University system. There was no
honor code. Over the course of the term, the students took 14
multiple-choice quizzes with no proctoring of any kind. At the end of the
term, 73 percent of the students admitted to cheating on at least one of
them.
The second trial involved another fully online
introductory course in the same subject. LoSchiavo and Shatz divided the
class evenly into two groups of 42 students, and imposed an honor code --
posted online with the other course materials -- on one group but not the
other. The students “digitally signed the code during the first week of the
term, prior to completing any assignments.” The definition of cheating was
the same as in the first trial: no notes, no textbooks, no Internet, no
family or friends. There was no significant difference in the self-reported
cheating between the two groups.
In a third trial, the professors repeated the
experiment with 165 undergraduates in a “blended” course, where only 20
percent of the course was administered online and 80 percent in a
traditional classroom setting. Again, they split the students into two
groups: one in which they were asked to sign an honor code, and another in
which they were not.
This time, when LoSchiavo and Shatz surveyed the
students at the end of the term, there was a significant difference:
Students who promised not to cheat were about 25 percent less likely to
cheat than were those who made no such promise. Among the students who had
not signed the code, 82 percent admitted to cheating.
LoSchiavo concedes that this study offers no
definitive answers on the question of whether students are more likely to
cheat in fully online courses. Cheating is more often than not a crime of
opportunity, and containing integrity violations probably has much more to
do with designing a system that limits the opportunities to cheat and gives
relatively little weight to those assignments for which cheating is hardest
to police.
“The bottom line is that if there are
opportunities, students will cheat,” he said. “And the more opportunities
they have, the more cheating there will be, and it is incumbent upon
professors to put in a system that, when it’s important, cheating will be
contained.”
Continued in article
Jensen Comment
I think universities like Trinity University that expanded their honor codes to
include student courts are generally happy with the operations of those honor
codes. However, Trinity has only full time students and no distance education
courses.
One thing that I hated giving up was grading control. For most of my teaching
career I gave F grades to students who seriously cheated in my courses. Under
the revised Trinity Honor Code, instructors can no longer control the granting
of F grades for cheating.
When I was a student at Stanford the Honor Code included a pledge to report
cheating of other students. I think most universities have watered down this
aspect of their honor codes because, in this greatly increased era of
litigation, student whistle blowers can be sued big time. Universities may
continue to encourage such whistle blowing, but they no longer make students
sign pledges that on their honor they will be whistleblowers if they do not want
to bear the risk of litigation by students they report.
Bob Jensen's threads on assessment ---
http://faculty.trinity.edu/rjensen/Assess.htm
Accounting Professors in Support of Online Testing That, Among Other
Things, Reduces Cheating
These same professors became widely known for their advocacy of self-learning in
place of lecturing
"In Support of the E-Test," by Elia Powers, Inside Higher Ed, August
29, 2007 ---
http://www.insidehighered.com/news/2007/08/29/e_test
Critics
of testing through the computer often argue that it’s
difficult to tell if students are doing their own work. It’s
also unclear to some professors whether using the technology
is worth their while.
A new study makes the argument that giving
electronic tests can actually reduce cheating and save
faculty time.
Anthony
Catanach Jr. and Noah Barsky, both associate professors of
accounting at the Villanova School of Business, came to that
conclusion after speaking with faculty members and analyzing
the responses of more than 100 students at Villanova and
Philadelphia University. Both Catanach and Barsky teach a
course called Principles of Managerial Accounting that
utilizes the WebCT Vista e-learning platform. The professors
also surveyed undergraduates at Philadelphia who took tests
electronically.
The
Villanova course follows a pattern of Monday lecture,
Wednesday case assignment, Friday assessment. The first two
days require in-person attendance, while students can check
in Friday from wherever they are.
“It never
used to make sense to me why at business schools you have
Friday classes,” Catanach said. “As an instructor it’s
frustrating because 30 percent of the class won’t show up,
so you have to redo material. We said, how can we make that
day not lose its effectiveness?”
The answer,
he and Barsky determined, was to make all electronically
submitted group work due on Fridays and have that be
electronic quiz day. That’s where academic integrity came
into play. Since the professors weren’t requiring students
to be present to take the exams, they wanted to deter
cheating. Catanach said programs like the one he uses
mitigate the effectiveness of looking up answers or
consulting friends.
In
electronic form, questions are given to students in random
order so that copying is difficult. Professors can change
variables within a problem to make sure that each test is
unique while also ensuring a uniform level of difficulty.
The programs also measure how much time a student spends on
each question, which could signal to an instructor that a
student might have slowed to use outside resources.
Backtracking on questions generally is not permitted.
Catanach said he doesn’t pay much attention to time spent on
individual questions. And since he gives his students a
narrow time limit to finish their electronic quizzes,
consulting outside sources would only lead students to be
rushed by the end of the exam, he added.
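The variable-changing approach described above can be sketched as a parameterized question generator. The depreciation question below is a hypothetical example (not from the Villanova course itself): randomizing the numbers makes each student's test item unique while the underlying difficulty stays uniform.

```python
import random

def make_depreciation_question(rng):
    """Build a straight-line depreciation problem with randomized
    numbers; any formula-based question works the same way."""
    cost = rng.randrange(10_000, 50_000, 1_000)
    salvage = rng.randrange(1_000, 5_000, 500)
    life = rng.choice([4, 5, 8, 10])
    text = (f"Equipment costs ${cost:,}, has a salvage value of ${salvage:,}, "
            f"and a {life}-year useful life. What is the annual straight-line "
            f"depreciation?")
    return text, (cost - salvage) / life

# Seeding by student ID makes each student's version reproducible for grading:
text, answer = make_depreciation_question(random.Random(12345))
```

Because the generator is deterministic given its seed, the instructor can regenerate any student's exact question and answer key after the fact.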
Forty-five
percent of students who took part in the study reported that
the electronic testing system reduced the likelihood of
their cheating during the course.
Stephen
Satris, director of the Center for Academic Integrity at
Clemson University, said he applauds the use of technology
to deter academic dishonesty. Students who take these
courses might think twice about copying or plagiarizing on
other exams, he said.
“It’s good
to see this program working,” Satris said. “It does an end
run around cheating.”
The report
also makes the case that both faculty and students save time
with e-testing. Catanach is up front about the initial time
investment: For instructors to make best use of the testing
programs, they need to create a “bank” of exam questions and
code them by topic, learning objectives and level of
difficulty. That way, the program knows how to distribute
questions. (He said instructors should budget roughly 10
extra hours per week during the course for this task.)
The payoff,
he said, comes later in the term. In the study, professors
reported recouping an average of 80 hours by using the
e-exams. Faculty don’t have to hand-grade tests (that often
being a deterrent for the Friday test, Catanach notes), and
graduate students or administrative staff can help prepare
the test banks, the report points out.
Since tests
are taken from afar, class time can be used for other
purposes. Students are less likely to ask about test results
during sessions, the study says, because the computer
program gives them immediate results and points to pages
where they can find out why their answers were incorrect.
Satris said this type of system likely dissuades students
from grade groveling, because the explanations are all there
on the computer. He said it also makes sense in other ways.
“I like that
professors can truly say, ‘I don’t know what’s going to be
on the test. There’s a question bank; it’s out of my
control,’ ” he said.
And then
there’s the common argument about administrative efficiency:
An institution can keep a permanent electronic record of its
students.
Survey
results showed that Villanova students, who Catanach said
were more likely to have their own laptop computers and be
familiar with e-technology, responded better to the
electronic testing system than did students at Philadelphia,
who weren’t as tech savvy. Both Catanach and Satris said the
e-testing programs are not likely to excite English and
philosophy professors, whose disciplines call for essay
questions rather than computer-graded content.
From a
testing perspective, Catanach said the programs can be most
helpful for faculty with large classes who need to save time
on grading. That’s why the programs have proven popular at
community colleges in some of the larger states, he said.
“It works
for almost anyone who wants to have periodic assessment,” he
said. “How much does the midterm and final motivate students
to keep up with material? It doesn’t. It motivates cramming.
This is a tool to help students keep up with the material.”
August 29, 2007 reply from Stokes, Len
[stokes@SIENA.EDU]
I am also a strong proponent of active learning
strategies. I have the luxury of a small class size. Usually fewer than 30
so I can adapt my classes to student interaction and can have periodic
assessment opportunities as it fits the flow of materials rather than the
calendar. I still think a push toward smaller classes with more faculty face
time is better than computer tests. One lecture and one case day does not
mean active learning. It is better than no case days but it is still a
lecture day. I don’t have real lecture days; every day involves some
interactive material from the students.
While I admit I can’t pick up all trends in grading the tests, I do pick up a
lot of things, so I tend to use a high proportion of essays and small
problems. I then try to address common
errors in class and also can look at my approach to teaching the material.
Len
Bob Jensen attempts to make a case that self learning is more effective
for metacognitive reasons ---
http://faculty.trinity.edu/rjensen/265wp.htm
This document features the research of Tony Catanach, David Croll, Bob
Grinaker, and Noah
Barsky.
Bob Jensen's threads on the myths of online education are at
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#Myths
How a Student Laid Up With a Broken Back Learned From Free Open Sharing of
Ivy League Courses
The big issue is how he can get transcript credit for his
accomplishments.
The Year 1858
When the University of London instituted
correspondence courses in 1858, the first university to do so, its students
(typically expatriates in what were then the colonies of Australia, Canada,
India, New Zealand, and South Africa) discovered the programme by word of
mouth and wrote the university to enrol. The university then despatched, by
post-and-boat, what today we would call the course outline, a set of
previous examination papers and a list of places around the world where
examinations were conducted. It left any "learning" to the
hapless student, who
sat the examination whenever he or she felt ready: a truly "flexible"
schedule! This was the first generation of distance education (Tabsall and
Ryan, 1999): "independent" learning for highly motivated and resourceful
autodidacts disadvantaged by distance.
(Page 71)
Yoni Ryan who wrote Chapter 5 of
The Changing Faces of Virtual
Education ---
http://www.col.org/virtualed/
Dr. Glen Farrell, Study Team Leader and Editor
The Commonwealth of Learning
Of course students paid for correspondence courses, and they got credit (often
they took exams proctored by the village vicar). In days of old, the University
of Chicago granted credit via onsite examination --- students did not have to
attend courses but had to pay for college degrees earned via examinations. In
modern times we usually insist that even online students do more for course
credit than merely passing examinations. Examples of other graded work
include term papers and team projects, which, of course, can be required of
online students in addition to examinations that might be administered at test
sites such as Sylvan testing centers or community colleges that administer
examinations for major universities.
In modern times, countless courses are available online, often from very
prestigious universities for credit for students admitted to online programs.
Courses from prestigious universities are also free to anybody in the world, but
these almost never award degree credits since examinations and projects are not
administered and graded. For links to many of the prestigious university course
materials, video lectures, and complete courses go to
http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI
One Business Model from Harvard
The Harvard Business School has a basic accounting course that can be purchased
and administered online by other colleges. Of course the credits granted are
from College X and not Harvard such that College X must provide instructors for
coordinating the course and administering the examinations and projects.
Financial Accounting: An Introductory Online Course by David F. Hawkins, Paul M.
Healy, Michael Sartor Publication date: Nov 04, 2005. Prod. #: 105708-HTM-ENG
http://harvardbusiness.org/product/financial-accounting-an-introductory-online-course/an/105708-HTM-ENG?Ntt=Basic+Accounting
"Open Courses: Free, but Oh, So Costly: Online students want credit;
colleges want a working business model," by Marc Parry, Chronicle of
Higher Education, October 11, 2009 ---
Click Here
http://chronicle.com/article/Free-Online-Courses-at-a-Very/48777/?sid=wb&utm_source=wb&utm_medium=en
Steven T. Ziegler leapt to MIT off a mountain.
He was on a hang glider, and he slammed the ground
hard on his chin. Recovery from surgery on his broken back left the
39-year-old high-school dropout with time for college courses.
From a recliner, the drugged-up crash victim tried
to keep his brain from turning to mush by watching a free
introductory-biology course put online by the Massachusetts Institute of
Technology. Hooked, he moved on to lectures about Cormac McCarthy's novel
Blood Meridian from an English course at Yale. Then he bought Paradise Lost.
A success for college-made free online
courses—except that Mr. Ziegler, who works for a restaurant-equipment
company in Pennsylvania, is on the verge of losing his job. And those
classes failed to provide what his résumé really needs: a college
credential.
"Do I put that I got a 343 out of 350 on my GED
test at age 16?" he says, throwing up his hands. "I have nothing else to
put."
Colleges, too, are grappling
with the limits of this global online movement. Enthusiasts think open
courses have the potential to uplift a nation of Zieglers by helping them
piece together cheaper degrees from multiple institutions. But some worry
that universities' projects may stall, because the recession and
disappearing grant money are forcing colleges to confront a difficult
question: What business model can support the high cost of giving away your
"free" content?
"With the economic downturn, I think it will be a
couple of years before Yale or other institutions are likely to be able to
make substantial investments in building out a digital course catalog," says
Linda K. Lorimer, vice president and secretary at Yale, which is publishing
a 36-class, greatest-hits-style video set called Open Yale Courses. Over the
long term, she argues, such work will flourish.
Maybe. But Utah State University recently
mothballed its OpenCourseWare venture after running out of money from the
state and from the William and Flora Hewlett Foundation, which has financed
much of the open-content movement. Utah State had published a mix of lecture
notes, syllabi, audio and video recordings from more than 80 courses, a
collection thought to be the country's second-largest behind the pioneering,
1,940-class MIT OpenCourseWare project. The program needed only $120,000 a
year to survive. But the economy was so bad that neither the university nor
the state Legislature would pony up more money for a project whose mission
basically amounted to blessing the globe with free course materials.
'Dead by 2012'
More free programs may run aground.
So argues David Wiley, open education's Everywhere Man, who set up the Utah
venture and is now an associate professor of instructional psychology and
technology at Brigham Young University. A newspaper once likened him to
Nostradamus for claiming that universities risked irrelevance by 2020. The
education oracle offers another prophecy for open courseware. "Every OCW
initiative at a university that does not offer distance courses for credit,"
he has blogged, "will be dead by the end of calendar 2012."
In other words: Nice knowing you, MIT
OpenCourseWare. So long, Open Yale Courses.
"I think the economics of open courseware the way
we've been doing it for the last almost decade have been sort of wrong," Mr.
Wiley tells The Chronicle. Projects aimed for "the world," not
bread-and-butter clientele like alumni and students. "Because it's not
connected to any of our core constituencies, those programs haven't been
funded with core funding. And so, in a climate where the economy gets bad
and foundation funding slows, then that's a critical juncture for the
movement."
Stephen E. Carson, external-relations director of
MIT's OpenCourseWare, chuckles at the 2012 prediction and chides Mr. Wiley
as someone who "specializes in provocative statements." But ventures around
the country are seriously exploring new business strategies. For some, it's
fund raising à la National Public Radio; for others, hooking open content to
core operations by dangling it as a gateway to paid courses.
For elite universities, the sustainability struggle
points to a paradox of opening access. If they do grant credentials, perhaps
even a certificate, could that dilute their brands?
"Given that exclusivity has come to be seen by some
as a question of how many students a university can turn away, I don't see
what's going to make the selective universities increase their appetite for
risking their brands by offering credits for online versions of core
undergraduate courses," says Roger C. Schonfeld, research manager at Ithaka
S+R, a nonprofit group focused on technology in higher education that is
studying online courseware.
The answer may be that elites won't have to. Others
can.
Ever since MIT made its curriculum freely available
online, its philanthropic feat has become a global trend. Colleges compete
to add new classes to the Web's ever-growing free catalog. The result is a
world where content and credentials no longer need to come from the same
source. A freshman at Podunk U. can study with the world's top professors on
YouTube. And within the emerging megalibrary of videos and syllabi and
multimedia classes—a library of perhaps 10,000 courses—proponents see the
building blocks of cheaper college options for self-teachers like Mr.
Ziegler.
The Great Unbundling
How? When open-education advocates like MIT's Mr. Carson peer into their
crystal balls, the images they see often hinge on one idea: the unbundling
of higher education.
The Great Higher Education Unbundling notion is
over a decade old. It's picked up buzz lately, though, as media commentators
compare the Internet's threat to college "conglomerates" with the way Web
sites like Craigslist clawed apart the traditional functions of newspapers.
Now take a university like MIT, where students pay
about $50,000 a year for a tightly knit package of course content, learning
experiences, certification, and social life. MIT OpenCourseWare has lopped
off the content and dumped it in cyberspace. Eventually, according to Mr.
Carson's take on the unbundling story, online learning experiences will
emerge that go beyond just content. Consider Carnegie Mellon University's
Open Learning Initiative, another darling of the movement, whose multimedia
courses track students' progress and teach them with built-in tutors—no
professor required.
"And then, ultimately, I think there will be
increasing opportunities in the digital space for certification as well,"
Mr. Carson says. "And that those three things will be able to be flexibly
combined by savvy learners, to achieve their educational goals at relatively
low cost."
And social life? Don't we need college to tailgate
and mate?
"Social life we'll just forget about because
there's Facebook," Mr. Wiley says. "Nobody believes that people have to go
to university to have a social life anymore."
Genre-Benders
If the paragraphs you just read triggered an it'll-never-happen snort, take
a look at what futurists like Mr. Wiley are trying—today—on the margins of
academe.
In August a global group of graduate students and
professors went live with an online book-club-like experiment that layers
the flesh of human contact on the bones of free content. At Peer 2 Peer
University, course organizers act more like party hosts than traditional
professors. Students are expected to essentially teach one another, and
themselves.
In September a separate institution started that
also exploits free online materials and peer teaching. At University of the
People, 179 first-term freshmen are already taking part in a project that
bills itself as the world's first nonprofit, tuition-free, online
university.
Continued in article
Bob Jensen's threads on open sharing videos, lectures and course materials
available free from prestigious universities ---
http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI
Bob Jensen's threads on online assessment for grading and course credit
---
http://faculty.trinity.edu/rjensen/assess.htm#OnlineOffCampus
Bob Jensen's threads on online training and education alternatives ---
http://faculty.trinity.edu/rjensen/Crossborder.htm
Barbara gave me permission to post the following message on March 15, 2006
My reply follows her message.
Professor Jensen:
I need your help in working with regulators who are
uncomfortable with online education.
I am currently on the faculty at the University of
Dallas in Irving, Texas and I abruptly learned yesterday that the Texas
State Board of Public Accountancy distinguishes online and on campus
offering of ethics courses that it approves as counting for students to meet
CPA candidacy requirements. Since my school offers its ethics course in both
modes, I am suddenly faced with making a case to the TSBPA in one week's
time to avoid rejection of the online version of the University of Dallas
course.
I have included in this email the "story" as I
understand it that explains my situation. It isn't a story about accounting
or ethics; it is a story about online education.
I would like to talk to you tomorrow because of
your expertise in distance education and involvement in the profession. In
addition, I am building a portfolio of materials this week for the Board
meeting in Austin March 22-23 to make a case for their approval (or at least
not rejection) of the online version of the ethics course that the Board
already accepts in its on campus version. I want to include compelling
research-based material demonstrating the value of online learning, and I
don't have time to begin that literature survey myself. In addition, I want
to be able to present preliminary results from reviewers of the University
of Dallas course about the course's merit in presentation of the content in
an online delivery.
Thank you for any assistance that you can give me.
Barbara W. Scofield
Associate Professor of Accounting
University of Dallas
1845 E Northgate Irving, TX 75062
972-721-5034
scofield@gsm.udallas.edu
A statement on the University of Dallas, the Texas State Board of Public
Accountancy, and Online Learning
The TSBPA approved the University of Dallas ethics
program in 2004. The course that was approved was a long-standing course,
required in several different graduate programs, called Business Ethics. The
course was regularly taught on campus (since 1995) and online (since 2001).
The application for approval of the ethics course
did not ask for information about whether the class was on campus or online
and the syllabus that was submitted happened to be the syllabus of an on
campus section. The TSBPA's position (via Donna Hiller) is that the Board
intended to approve only the on campus version of the course, and that the
Board inferred it was an on campus course because the sample syllabus that
was submitted was an on campus course.
Therefore the TSBPA (via Donna Hiller) is requiring
that University of Dallas students who took the online version of the ethics
course retake the exact same course in its on campus format. While the TSBPA
(via Donna Hiller) has indicated that the online course cannot at this time
be approved and its scheduled offering in the summer will not provide
students with an approved course, Donna Hiller, at my request, has indicated
that she will take this issue to the Board for their decision next week at
the Executive Board Meeting on March 22 and the Board Meeting on March 23.
There are two issues:
1. Treatment of students who, at the time they took the class, were relying
on communication from the Board that could reasonably have been interpreted
to confer approval of both the online and on campus sections of the ethics
course.
2. Status of the upcoming summer online ethics
class.
My priority is establishing the status of the
upcoming summer online ethics class. The Board has indicated through its
pilot program with the University of Texas at Dallas that there is a place
for online ethics classes in the preparation of CPA candidates. The
University of Dallas is interested in providing the TSBPA with any
information or assessment necessary to meet the needs of the Board to
understand the online ethics class at the University of Dallas. Although not
currently privy to the Board's specific concerns about online courses, the
University of Dallas believes that it can demonstrate sufficient credibility
for the course because of the following factors:
A. The content of the online course is the same as the on campus course.
Content comparison can be provided.
B. The instructional methods of the online course involve intense
student-to-student, instructor-to-student, and student-to-content
interaction at a level equivalent to an on campus course. Empirical
information about interaction in the course can be provided.
C. The instructor for the course is superbly
qualified and a long-standing ethics instructor and distance learning
instructor. The vita of the instructor can be provided.
D. There are processes for course assessment in
place that regularly prompt the review of this course and these assessments
can be provided to the board along with comparisons with the on campus
assessments.
E. The University of Dallas will seek to coordinate
with the work done by the University of Texas at Dallas to provide
information at least equivalent to that provided by the University of Texas
at Dallas and to meet at a minimum the tentative criteria for online
learning that UT Dallas has been empowered to recommend to the TSBPA.
Contact with the University of Texas at Dallas has been initiated.
When the online ethics course is granted a path to
approval by the Board, I am also interested in addressing the issue of TSBPA
approval of students who took the class between the original ethics course
approval date and March 13, 2006, the date that the University of Dallas
became aware of the TSBPA's intent (through Donna Hiller) that the TSBPA
distinguished online and on campus ethics classes.
The University of Dallas believes that the online
class in fact provided these students with a course that completely
fulfilled the general intent of the Board for education in ethics, since it
is the same course as the approved on campus course (see above). The
decision on the extent of commitment of the Board to students who relied on
the Board's approval letter may be a legal issue of some sort that is
outside of the current decision-making of the Board, but I want the Board to
take the opportunity to consider that the reasonableness of the students'
position and the students' actual preparation in ethics suggest that there
should also be a path created to approval of online ethics courses taken at
the University of Dallas during this prior time period. The currently
proposed remedy of a requirement for students to retake the very same course
on campus that students have already taken online appears excessively costly
to Texans and the profession of accounting by delaying the entry of
otherwise qualified individuals into public accountancy. High cost is
justified when the concomitant benefits are also high. However, the benefit
to Texans and the accounting profession from students who retake the ethics
course seems to exist only in meeting the requirements of regulations that
all parties diligently sought to meet in the first place and not in
producing any actual additional learning experiences.
A reply to her from Bob Jensen
Hi Barbara,
May I
share your questions and my responses in the next edition of New
Bookmarks? This might be helpful to your efforts when others become
informed. I will be in my office every day except for March 17. My phone
number is 210-999-7347. However, I can probably be more helpful via
email.
As
discouraging as it may seem, if students know what is expected of them
and must demonstrate what they have learned, pedagogy does not seem to
matter. It can be online or onsite. It can be lecture or cases. It can
be no teaching at all if there are talented and motivated students who
are given great learning materials. This is called the well-known “No
Significant Difference” phenomenon ---
http://www.nosignificantdifference.org/
I think
you should stress that insisting upon onsite courses is discriminatory
against potential students whose life circumstances make it difficult or
impossible to attend regular classes on campus.
I think
you should make the case that online education is just like onsite
education in the sense that learning depends on the quality and
motivations of the students, faculty, and university that sets the
employment and curriculum standards for quality. The issue is not onsite
versus online. The issue is quality of effort.
The most
prestigious schools like Harvard and Stanford and Notre Dame have a
large number of credit and non-credit courses online. Entire accounting
undergraduate and graduate degree programs are available online from
such quality schools as the University of Wisconsin and the University
of Maryland. My guide to online training and education programs is
at
http://faculty.trinity.edu/rjensen/crossborder.htm
My main
introductory document on the future of distance education is at
http://faculty.trinity.edu/rjensen/000aaa/updateee.htm
Anticipate and deal with the main arguments against online education.
The typical argument is that onsite students have more learning
interactions with themselves and with the instructor. This is absolutely
false if the distance education course is designed to promote online
interactions that do a better job of getting into each other's heads.
Online courses become superior to onsite courses.
Amy
Dunbar teaches intensely interactive online courses with Instant
Messaging. See Dunbar, A. 2004. “Genesis of an Online Course.” Issues in
Accounting Education 19(3): 321-343.
ABSTRACT:
This paper presents a descriptive and evaluative analysis of the
transformation of a face-to-face graduate tax accounting course to an
online course. One hundred fifteen students completed the compressed
six-week class in 2001 and 2002 using WebCT, classroom environment
software that facilitates the creation of web-based educational
environments. The paper provides a description of the required
technology tools and the class conduct. The students used a combination
of asynchronous and synchronous learning methods that allowed them to
complete the coursework on a self-determined schedule, subject to
semi-weekly quiz constraints. The course material was presented in
content pages with links to Excel® problems, Flash examples, audio and
video files, and self-tests. Students worked the quizzes and then met in
their groups in a chat room to resolve differences in answers. Student
surveys indicated satisfaction with the learning methods.
I might
add that Amy is a veteran world class instructor both onsite and online.
She’s achieved all-university awards for onsite teaching in at least
three major universities. This gives her the credentials to judge how
well her online courses compare with her outstanding onsite courses.
A free
audio download of a presentation by Amy Dunbar is available at
http://www.cs.trinity.edu/~rjensen/002cpe/02start.htm#2002
The
argument that students cannot be properly assessed for learning online
is more problematic. Clearly it is easier to prevent cheating with
onsite examinations. But there are ways of dealing with this problem.
My best example of an online graduate program that is extremely
difficult is the Chartered Accountant School of Business (CASB) masters
program for all of Western Canada. Students are required to take some
onsite testing even though this is an online degree program. And CASB
does a great job with ethics online. I was engaged to formally assess
this program and came away extremely impressed. My main contact there is
Don Carter (carter@casb.com). If you are really serious about this, I would
invite Don to come down and make a presentation to the Board. Don will
convince them of the superiority of online education.
You can
read some about the CASB degree program at
http://www.casb.com/
You can
read more about assessment issues at
http://faculty.trinity.edu/rjensen/assess.htm
I think a
lot of the argument against distance education comes from faculty
fearful of one day having to teach online. First there is the fear of
change. Second there is the genuine fear that is entirely justified ---
if online teaching is done well it is more work and strain than onsite
teaching. The strain comes from increased hours of communication with
each and every student.
Probably
the most general argument in favor of onsite education is that students
living on campus have the social interactions and maturity development
outside of class. This is most certainly a valid argument. However, when
it comes to issues of learning of course content, online education can
be as good as or generally better than onsite classes. Students in
online programs are often older and more mature such that the on-campus
advantages decline in their situations. Online students generally have
more life, love, and work experiences already under their belts. And
besides, you’re only talking about ethics courses rather than an entire
undergraduate or graduate education.
I think
if you deal with the learning interaction and assessment issues that you
can make a strong case for distance education. There are some “dark
side” arguments that you should probably avoid. But if you care to read
about them, go to
http://faculty.trinity.edu/rjensen/000aaa/theworry.htm
Bob Jensen
March 15,
2006 reply from Bruce Lubich
[BLubich@UMUC.EDU]
Bob, as a director and teacher in a graduate
accounting program that is exclusively online, I want to thank you for your
support and eloquent defense of online education. Unfortunately, Texas's
predisposition against online teaching also shows up in its education
requirements for sitting for the CPA exam. Of the 30 required upper division
accounting credits, at least 15 must "result from physical attendance at
classes meeting regularly on the campus" (quote from the Texas State Board
of Public Accountancy website at www.tsbpa.state.tx.us/eq1.htm).
Cynically speaking, it seems the state of Texas
wants to be sure its classrooms are occupied.
Barbara, best of luck with your testimony.
Bruce Lubich
Program Director,
Accounting Graduate School of Management and Technology
University of Maryland University College
March 15, 2006 reply from David Albrecht
[albrecht@PROFALBRECHT.COM]
At my school, Bowling Green, students' credits for on-line accounting
classes in the major are never approved by the department chair. He says
that you can't trust the schools that are offering these. When told that
some very reputable schools are offering the courses, he still says no
because when the testing process is done on-line, or not in the physical
presence of the professor, the grades simply can't be trusted.
David Albrecht
March 16, 2006 reply from Bob Jensen
Hi David,
One tack against luddites like that is to propose a compromise that
accepts virtually all transfer credits from AACSB-accredited universities.
It's difficult to argue that standards vary between online and onsite
courses in a given program accredited by the AACSB. I seriously doubt that
the faculty in that program would allow a double academic standard.
In fact, on transcripts it is often impossible to distinguish online from
onsite credits from respected universities, especially when the same
course is offered online and onsite (i.e., merely in different sections).
You might explain to your department chair that he's probably been
accepting online transfer credits for some time. The University of North
Texas and other major universities now offer online courses to full-time
resident students who live on campus. Some students and instructors find
this to be a better approach to learning.
And you might ask him why Bowling Green's assessment rigor is not widely known
to be vastly superior to online courses from nearly all major universities
that now offer distance education courses and even total degree programs,
including schools like the Fuqua Graduate School at Duke, Stanford
University (especially computer science and engineering online courses that
bring in over $100 million per year), the University of Maryland, the
University of Wisconsin, the University of Texas, Texas Tech, and even,
gasp, The Ohio State University.
You might tell your department chair that by not offering some online
alternatives, Bowling Green is not getting the most out of its students. The
University of Illinois conducted a major study that found that students
performed better in online versus onsite courses when matched pair sections
took the same examinations.
And then you might top it off by asking your department chair how he
justifies denying credit for Bowling Green's own distance education courses
---
http://adultlearnerservices.bgsu.edu/index.php?x=opportunities
The following is a quotation from the above Bowling Green site:
*****************************
The advancement of computer technology has
provided a wealth of new opportunities for learning. Distance education
is one example of technology’s ability to expand our horizons and gain
from new experiences. BGSU offers many distance education courses and
two baccalaureate degree completion programs online.
The Advanced Technological Education Degree
Program is designed for individuals who have completed a two-year
applied associate’s degree. The Bachelor of Liberal Studies Degree
Program is ideal for students with previous college credit who would
like flexibility in course selection while completing a liberal
education program.
Distance Education Courses and Programs ---
http://ideal.bgsu.edu/ONLINE/
***************************
Bob Jensen
March 16, 2006 reply from Amy Dunbar
[Amy.Dunbar@BUSINESS.UCONN.EDU]
Count me in the camp that just isn't that concerned
about online cheating. Perhaps that is because my students are graduate
students and my online exams are open-book, timed exams, and a different
version is presented to each student (much like a driver's license exam). In
my end-of-semester survey, I ask whether students are concerned about
cheating, and on occasion, I get one who is. But generally the response is
no.
The UConn accounting department was just reviewed
by the AACSB, and they were impressed by our MSA online program. They
commented that they now believed that an online MSA program was possible. I
am convinced that the people who are opposed to online education are
unwilling to invest the time to see how online education is implemented.
Sure there will be bad examples, but there are bad examples of face to face
(FTF) teaching. How many profs do you know who simply read PowerPoint slides
to a sleeping class?! Last semester, I received the School of Business
graduate teaching award even though I teach only online classes. I believe
that the factor that really matters is that the students know you care about
whether they are learning. A prof who cares interacts with students. You can
do that online as well as FTF.
Do I miss FTF teaching -- you bet I do. But once I
focused on what the student really needs to learn, I realized, much to my
dismay, interacting FTF with Dunbar was not a necessary condition.
Amy Dunbar
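Dunbar's tactic above, a different exam version for each student, can be sketched in a few lines of Python. This is a hypothetical illustration only: the question pool, student IDs, and version size are invented, and the code is a sketch of the general idea, not her actual WebCT setup.

```python
import random

def build_exam_version(student_id, question_pool, num_questions=5):
    """Draw a per-student exam version from a shared question pool.

    Seeding the generator with the student ID (a hypothetical identifier)
    makes each version reproducible for re-grading while still differing
    across students."""
    rng = random.Random(student_id)            # deterministic per student
    questions = rng.sample(question_pool, num_questions)
    rng.shuffle(questions)                     # also vary question order
    return questions

# Invented pool of 20 question identifiers
pool = [f"Q{i:02d}" for i in range(1, 21)]

version_a = build_exam_version("student-001", pool)
version_b = build_exam_version("student-002", pool)
print(version_a)
print(version_a == build_exam_version("student-001", pool))  # prints True
```

Because the draw is seeded, the instructor can regenerate any student's version later; different students see different subsets in different orders, which is what blunts answer-sharing on open-book, timed exams.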
March 16, 2006 message from Carol Flowers
[cflowers@OCC.CCCD.EDU]
To resolve this issue and make me
more comfortable with the grade a student earns, I have all my online exams
proctored. I schedule weekends (placing them in the schedule of classes) and
it is mandatory that they take the exams during this weekend period
(Fri/Sat) at our computing center. It is my policy that if they can't take
the paced exams during those periods, then the class is not one that they
can participate in. This is no different from having different times that
courses are offered. They have to make a choice in that situation, also, as
to which time will best serve their needs.
March 16, 2006 reply from David Fordham, James Madison
University
[fordhadr@JMU.EDU]
Our model is similar to Carol Flowers's. Our on-line
MBA program requires an in-person meeting for four hours at the beginning of
every semester, to let the students and professor get to know each other
personally, followed by the distance-ed portion, concluding with another
four-hour in-person session for the final examination or other assessment.
The students all congregate at the Sheraton at Dulles airport, have dinner
together Friday night, spend Saturday morning taking the final for their
previous class, and spend Saturday afternoon being introduced to their next
class. They do this between semesters. So far, the on-line group has
outperformed (very slightly, and not statistically significant due to small
sample sizes) the face-to-face counterparts being used as our control
groups. We believe the outperformance might have an inherent self-selection
bias since the distance-learners are usually professionals, whereas many of
our face-to-face students are full-time students and generally a bit younger
and more immature.
My personal on-line course consists of exactly the
same readings as my F2F class, and exactly the same lectures (recorded using
Tegrity) provided on CD and watched asynchronously, followed by on-line
synchronous discussion sessions (2-3 hours per week) where I call on random
students asking questions about the readings, lectures, etc., and engaging
in lively discussion. I prepare some interesting cases and application
dilemmas (mostly adapted from real world scenarios) and introduce dilemmas,
gray areas, controversy (you expected maybe peace and quiet from David
Fordham?!), and other thought-provoking issues for discussion. I have almost
perfect attendance in the on-line synchronous sessions because the students really
find the discussions engaging. Surprisingly, I have no problem with
freeloaders who don't read or watch the recorded lectures. My major student
assessment vehicle is an individual policy manual, supplemented by the
in-person exam. Since each student's manual organization, layout, approach,
and perspective is so very different from the others, cheating is almost out
of the question. And the in-person exam is conducted almost like the CISP or
old CPA exams... total quiet, no talking, no leaving the room, nothing but a
pencil, etc.
And finally, no, you can't tell the difference on
our student's transcript as to whether they took the on-line or in-person
MBA. They look identical on the transcript.
We've not yet had any problem with anyone
"rejecting" our credential that I'm aware of.
Regarding our own acceptance of transfer credit, we
make the student provide evidence of the quality of each course (not the
degree) before we exempt or accept credit. We do not distinguish between
on-line or F2F -- nor do we automatically accept a course based on
institution reputation. We have on many occasions rejected AACSB-accredited
institution courses (on a course-by-course basis) because our investigation
showed that the course coverage or rigor was not up to the standard we
required. (The only "blanket" exception that we make is for certain familiar
Virginia community college courses in the liberal studies where history has
shown that the college and coursework reliably meets the standards -- every
other course has to be accepted on a course-by-course basis.)
Just our $0.02 worth.
David Fordham
James Madison University
DOES DISTANCE LEARNING WORK?
A LARGE SAMPLE, CONTROL GROUP STUDY OF STUDENT SUCCESS IN DISTANCE LEARNING
by
James Koch ---
http://www.usq.edu.au/electpub/e-jist/docs/vol8_no1/fullpapers/distancelearning.htm
The relevant public policy question is this---Does
distance learning "work" in the sense that students experience at least as
much success when they utilize distance learning modes as compared to when
they pursue conventional bricks and mortar education? The answer to this
question is critical in determining whether burgeoning distance learning
programs are cost-effective investments, either for students or for
governments.
Of course, it is difficult to measure the
"learning" in distance learning, not the least because distance learning
courses now span nearly every academic discipline. Hence, most large sample
evaluative studies utilize students’ grades as an imperfect proxy for
learning. That approach is followed in the study reported here, as well.
A recent review of research in distance education
reported that 1,419 articles and abstracts appeared in major distance
education journals and as dissertations during the 1990-1999 period (Berge
and Mrozowski, 2001). More than one hundred of these studies focused upon
various measures of student success (such as grades, subsequent academic
success, and persistence) in distance learning courses. Several asked the
specific question addressed in this paper: Why do some students do better
than others, at least as measured by the grade they receive in their
distance learning course? A profusion of contradictory answers has emanated
from these studies (Berge and Mrozowski, 2001; Machtmes and Asher, 2000). It
is not yet clear how important to individual student success are factors
such as the student’s characteristics (age, ethnic background, gender,
academic background, etc.). However, other than knowing that experienced
faculty are more effective than less experienced faculty (Machtmes and
Asher, 2000), we know even less about how important the characteristics of
distance learning faculty are to student success, particularly where
televised, interactive distance learning is concerned.
Perhaps the only truly strong conclusion emerging
from previous empirical studies of distance learning is the oft cited "no
significant difference" finding (Saba, 2000). Indeed, an entire web site,
http://teleeducation.nb.ca/nosignificantdifference, exists that reports 355
such "no significant difference" studies. Yet such studies, whatever their
merits, do not tell us why some students achieve better grades than others
when they utilize distance learning.
Several studies have suggested that student
learning styles and receptivity to distance learning influence student
success (see Taplin and Jegede, 2001, for a short survey). Unfortunately, as
Maushak et al. (2001) point out, these intuitively sensible findings are
not yet highly useful, because they are not based upon large sample, control
group evidence that relates recognizable student learning styles to student
performance. Studies that rely upon "conversation and discourse analysis"
(Chen and Willits, 1999, provide a representative example) and interviews
with students are helpful, yet are sufficiently anecdotal that they are
unlikely to lead us to scientifically based conclusions about what works and
what does not.
This paper moves us several steps forward in terms
of our knowledge by means of a very large distance education sample (76,866
individual student observations) and an invaluable control group of students
who took the identical course at the same time from the same instructor, but
did so "in person" in a conventional "bricks and mortar" location. The
results indicate that gender, age, ethnic background, distance learning
experience, experience with the institution providing the instruction, and
measures of academic aptitude and previous academic success are
statistically significant determinants of student success. Similarly,
faculty characteristics such as gender, age, ethnic background, and
educational background are statistically significant predictors of student
success, though not necessarily in the manner one might hypothesize.
Continued in this working paper
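The "no significant difference" comparisons described above come down to tests like the following. Here is a minimal, self-contained sketch of Welch's t-test comparing an online section against an on-campus control section; the grade data are invented purely for illustration and are not from the study.

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t-statistic for two independent samples (unequal variances)."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    # Sample variances (Bessel-corrected).
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    se = math.sqrt(va / na + vb / nb)  # standard error of the mean difference
    return (ma - mb) / se

# Hypothetical course grades (0-4 scale) for an online section and an
# on-campus control section taught by the same instructor at the same time.
online = [3.1, 2.8, 3.4, 2.9, 3.0, 3.2, 2.7, 3.3]
onsite = [3.0, 2.9, 3.3, 3.1, 2.8, 3.2, 2.9, 3.1]

t = welch_t(online, onsite)
# A |t| well below roughly 2 is consistent with "no significant difference"
# between delivery modes at conventional significance levels.
print(round(t, 3))
```

A real replication would, as the paper does, control for student and faculty characteristics (gender, age, ethnicity, aptitude measures) in a multivariate regression rather than a two-sample test.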
January 6, 2006 message from Carolyn Kotlas
[kotlas@email.unc.edu]
No Significant Difference Phenomenon website
http://www.nosignificantdifference.org/
The website is a companion piece to Thomas L.
Russell's book THE NO SIGNIFICANT DIFFERENCE PHENOMENON, a bibliography of
355 research reports, summaries, and papers that document no significant
differences in student outcomes between alternate modes of education
delivery.
DISTANCE LEARNING AND FACULTY CONCERNS
Despite the growing number of distance learning
programs, faculty are often reluctant to move their courses into the online
medium. In "Addressing Faculty Concerns About Distance Learning" (ONLINE
JOURNAL OF DISTANCE LEARNING ADMINISTRATION, vol. VIII, no. IV, Winter 2005)
Jennifer McLean discusses several areas that influence faculty resistance,
including: the perception that technical support and training is lacking,
the fear of being replaced by technology, and the absence of a
clearly-understood institutional vision for distance learning. The paper is
available online at
http://www.westga.edu/%7Edistance/ojdla/winter84/mclean84.htm
The Online Journal of Distance Learning
Administration is a free, peer-reviewed quarterly published by the Distance
and Distributed Education Center, The State University of West Georgia, 1600
Maple Street, Carrollton, GA 30118 USA; Web:
http://www.westga.edu/~distance/jmain11.html
Bob Jensen's threads on faculty concerns are at
http://faculty.trinity.edu/rjensen/assess.htm
Also see Bob Jensen's threads on the dark side at
http://faculty.trinity.edu/rjensen/000aaa/theworry.htm
QUESTIONING THE VALUE OF LEARNING
TECHNOLOGY
"The notion that the future of education lies
firmly in learning technology, seen as a tool of undoubted magnitude and a
powerful remedy for many educational ills, has penetrated deeply into the
psyche not only of those involved in delivery but also of observers,
including those in power within national governments." In a paper published
in 1992, Gabriel Jacobs expressed his belief that hyperlink technology would
be a "teaching resource that would transform passive learners into active
thinkers." In "Hypermedia and Discovery Based Learning: What Value?"
(AUSTRALASIAN JOURNAL OF EDUCATIONAL TECHNOLOGY, vol. 21, no. 3, 2005, pp.
355-66), he reconsiders his opinions, "the result being that the guarded
optimism of 1992 has turned to a deep pessimism." Jacobs's paper is available
online at
http://www.ascilite.org.au/ajet/ajet21/jacobs.html .
The Australasian Journal of Educational Technology
(AJET) [ISSN 1449-3098 (print), ISSN 1449-5554 (online)], published three
times a year, is a refereed journal publishing research and review articles
in educational technology, instructional design, educational applications of
computer technologies, educational telecommunications, and related areas.
Back issues are available on the Web at no cost. For more information and
back issues go to
http://www.ascilite.org.au/ajet/ajet.html .
See Bob Jensen's threads on the dark side at
http://faculty.trinity.edu/rjensen/000aaa/theworry.htm
June 1, 2007 message from Carolyn Kotlas
[kotlas@email.unc.edu]
TEACHING THE "NET GENERATION"
The April/May 2007 issue of INNOVATE explores and
explains the learning styles and preferences of Net Generation learners.
"Net Generation learners are information seekers, comfortable using
technology to seek out information, frequently multitasking and using
multiple forms of media simultaneously. As a result, they desire
independence and autonomy in their learning processes."
Articles include:
"Identifying the Generation Gap in Higher
Education: Where Do the Differences Really Lie?"
by Paula Garcia and Jingjing Qin, Northern Arizona University
"MyLiteracies: Understanding the Net Generation
through LiveJournals and Literacy Practices"
by Dana J. Wilber, Montclair State University
"Is Education 1.0 Ready for Web 2.0 Students?"
by John Thompson, Buffalo State College
The issue is available at
http://innovateonline.info/index.php.
Registration is required to access articles;
registration is free.
Innovate: Journal of Online Education [ISSN
1552-3233], an open-access, peer-reviewed online journal, is published
bimonthly by the Fischler School of Education and Human Services at Nova
Southeastern University.
The journal focuses on the creative use of
information technology (IT) to enhance educational processes in academic,
commercial, and governmental settings. For more information, contact James
L. Morrison, Editor-in-Chief; email:
innovate@nova.edu ;
Web: http://innovateonline.info/.
The journal also sponsors Innovate-Live webcasts
and discussion forums that add an interactive component to the journal
articles. To register for these free events, go to
http://www.uliveandlearn.com/PortalInnovate/.
See also:
"Motivating Today's College Students"
By Ian Crone
PEER REVIEW, vol. 9, no. 1, Winter 2007
http://www.aacu.org/peerreview/pr-wi07/pr-wi07_practice.cfm
Peer Review, published quarterly by the Association
of American Colleges and Universities (AACU), provides briefings on
"emerging trends and key debates in undergraduate liberal education. Each
issue is focused on a specific topic, provides comprehensive analysis, and
highlights changing practice on diverse campuses." For more information,
contact: AACU, 1818 R Street NW, Washington, DC 20009 USA;
tel: 202-387-3760; fax: 202-265-9532;
Web:
http://www.aacu.org/peerreview/.
For a perspective on educating learners on the
other end of the generational continuum see:
"Boomer Reality"
By Holly Dolezalek
TRAINING, vol. 44, no. 5, May 2007
http://www.trainingmag.com/msg/content_display/publications/e3if330208bec8f4014fac339db9fd0678e
Training [ISSN 0095-5892] is published monthly by
Nielsen Business Media, Inc., 770 Broadway, New York, NY 10003-9595 USA;
tel: 646-654-4500; email:
bmcomm@nielsen.com ;
Web: http://www.trainingmag.com.
Bob Jensen's threads on learning can be found at the following Web sites:
http://faculty.trinity.edu/rjensen/assess.htm
http://faculty.trinity.edu/rjensen/255wp.htm
http://faculty.trinity.edu/rjensen/265wp.htm
http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
June 1, 2007 message from Carolyn Kotlas
[kotlas@email.unc.edu]
TECHNOLOGY AND CHANGE IN EDUCATIONAL PRACTICE
"Even if research shows that a particular
technology supports a certain kind of learning, this research may not reveal
the implications of implementing it. Without appropriate infrastructure or
adequate provisions of services (policy); without the facility or ability of
teachers to integrate it into their teaching practice (academics); without
sufficient support from technologists and/or educational technologists
(support staff), the likelihood of the particular technology or software
being educationally effective is questionable."
The current issue (vol. 19, no. 1, 2007) of the
JOURNAL OF EDUCATIONAL TECHNOLOGY & SOCIETY presents a selection of papers
from the Conference Technology and Change in Educational Practice which was
held at the London Knowledge Lab, Institute of Education, London in October
2005.
The papers cover three areas: "methodological
frameworks, proposing new ways of structuring effective research; empirical
studies, illustrating the ways in which technology impacts the working roles
and practices in Higher Education; and new ways of conceptualising
technologies for education."
Papers include:
"A Framework for Conceptualising the Impact of
Technology on Teaching and Learning"
by Sara Price and Martin Oliver, London Knowledge Lab, Institute of
Education
"New and Changing Teacher Roles in Higher Education
in a Digital Age"
by Jo Dugstad Wake, Olga Dysthe, and Stig Mjelstad, University of Bergen
"Academic Use of Digital Resources: Disciplinary
Differences and the Issue of Progression Revisited"
by Bob Kemp, Lancaster University, and Chris Jones, Open University
"The Role of Blogs In Studying the Discourse and
Social Practices of Mathematics Teachers"
by Katerina Makri and Chronis Kynigos, University of Athens
The issue is available at
http://www.ifets.info/issues.php?show=current.
The Journal of Educational Technology and Society
[ISSN 1436-4522] is a peer-reviewed, quarterly publication that "seeks
academic articles on the issues affecting the developers of educational
systems and educators who implement and manage such systems." Current and
back issues are available at
http://www.ifets.info/. The
journal is published by the International Forum of Educational Technology &
Society. For more information, see
http://ifets.ieee.org/.
Bob Jensen's threads on blogs and listservs are at
http://faculty.trinity.edu/rjensen/ListservRoles.htm
Bob Jensen's threads on education technologies are at
http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
Bob Jensen's threads on distance education and training alternatives are at
http://faculty.trinity.edu/rjensen/Crossborder.htm
Civil Rights Groups That Favor Standardized Testing
"Teachers and Rights Groups Oppose Education Measure," by Diana Jean Schemo,
The New York Times, September 11, 2007 ---
http://www.nytimes.com/2007/09/11/education/11child.html?_r=1&oref=slogin
The draft House bill to renew the federal No Child
Left Behind law came under sharp attack on Monday from civil rights groups
and the nation’s largest teachers unions, the latest sign of how difficult
it may be for Congress to pass the law this fall.
At a marathon hearing of the House Education
Committee, legislators heard from an array of civil rights groups, including
the Citizens’ Commission on Civil Rights, the National Urban League, the
Center for American Progress and Achieve Inc., a group that works with
states to raise academic standards.
All protested that a proposal in the bill for a
pilot program that would allow districts to devise their own measures of
student progress, rather than using statewide tests, would gut the law’s
intent of demanding that schools teach all children, regardless of poverty,
race or other factors, to the same standard.
Dianne M. Piché, executive director of the
Citizens’ Commission on Civil Rights, said the bill had “the potential to
set back accountability by years, if not decades,” and would lead to lower
standards for children in urban and high poverty schools.
“It strikes me as not unlike allowing my teenage
son and his friends to score their own driver’s license tests,” Ms. Piché
said, adding, “We’ll have one set of standards for the Bronx and one for
Westchester County, one for Baltimore and one for Bethesda.”
Continued in article
"Obama’s Union-Friendly,
Feel-Good Approach to Education." by Kyle Olson, Townhall, March 30,
2011 ---
http://townhall.com/columnists/kyleolson/2011/03/30/obama%E2%80%99s_union-friendly,_feel-good_approach_to_education
The Obama administration, principally the president
and Education Secretary Arne Duncan, are now routinely making public
statements which are leading to one conclusion: instead of fixing American
education, we should dumb down the standards.
According to the
Associated Press, President Obama “is pushing a
rewrite of the nation’s education law that would ease some of its rigid
measurement tools” and wants “a test that ‘everybody agrees makes sense’ and
administer it in less pressure-packed atmospheres, potentially every few
years instead of annually.”
The article goes on to say that Obama wants to move
away from proficiency goals in math, science and reading, in favor of the
ambiguous and amorphous goals of student readiness for college and career.
Obama’s new focus comes on the heels of a
New York Times report that 80% of American public
schools could be labeled as failing under the standards of No Child Left
Behind.
Put another way: the standards under NCLB have
revealed that the American public education system is full of cancer.
Instead of treating the cancer, Obama wants to change the test, as if
ignoring the MRI somehow makes the cancer go away.
So instead of implementing sweeping policies to
correct the illness, Obama is suggesting that we just stop testing to
pretend it doesn’t exist.
If Obama were serious about curing the disease, one
of the best things he could do is to ensure that there is a quality teacher
in every classroom in America. Of course, that would mean getting rid of
teacher tenure and scrapping seniority rules that favor burned-out teachers
over ambitious and innovative young teachers.
That means standing up to the teacher unions. For a
while, it looked like Obama would get tough with the unions, but not
anymore. With a shaky economy and three wars, it looks like Obama’s
re-election is in serious jeopardy. He needs all hands on deck – thus the
new union-friendly education message.
Obama’s new direction will certainly make the
unionized adults happy. They’ve hated NCLB from the get-go.
And the unions will love Obama’s talk about using
criteria other than standardized testing in evaluating schools.
He doesn’t get specific, of course, but I bet I can
fill in the gaps. If testing is too harsh, perhaps we can judge students and
schools based on how hard they try or who can come up with the most
heart-wrenching excuse for failure or how big the dog was that ate their
homework.
Continued in article
"Department
of Injustice," by Walter E. Williams, Townhall, March 30, 2011 ---
http://townhall.com/columnists/walterewilliams/2011/03/30/department_of_injustice
One of the requirements to become a Dayton, Ohio
police officer is to successfully pass the city's two-part written
examination. Applicants must correctly answer 57 of 86 questions on the
first part (66 percent) and 73 of 102 (72 percent) on the second part.
Dayton's Civil Service Board reported that 490 candidates passed the
November 2010 written test, 57 of whom were black. About 231 of the roughly
1,100 test takers were black.
The U.S. Department of Justice, led by Attorney
General Eric Holder, rejected the results of Dayton's Civil Service
examination because not enough blacks passed. The DOJ has ordered the city
to lower the passing score. The lowered passing grade requires candidates to
answer 50 of 86 (58 percent) questions correctly on the first part and 64 of
102 (63 percent) of questions on the second. The DOJ-approved scoring policy
requires potential police officers to earn the equivalent of an "F" on the
first part and a "D" on the second. Based on the DOJ-imposed passing scores,
a total of 748 people, 258 more than before, were reported passing the exam.
Unreported was just how many of the 258 are black.
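The cut scores quoted in the article can be checked with a few lines of arithmetic; the question counts and score thresholds below are taken directly from the passage above.

```python
def pct(correct, total):
    """Percentage of questions answered correctly, rounded to a whole percent."""
    return round(100 * correct / total)

# Dayton's original cut scores: 57 of 86 on part one, 73 of 102 on part two.
original = [pct(57, 86), pct(73, 102)]
# DOJ-approved lowered cut scores: 50 of 86 and 64 of 102.
lowered = [pct(50, 86), pct(64, 102)]

print(original, lowered)  # [66, 72] [58, 63]
```

The computed percentages match the 66/72 and 58/63 figures reported in the column.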
Keith Lander, chairman of the Dayton chapter of the
Southern Christian Leadership Conference, and Dayton NAACP president Derrick
Foward condemned the DOJ actions.
Mr. Lander said, "Lowering the test score is
insulting to black people," adding, "The DOJ is creating the perception that
black people are dumb by lowering the score. It's not accomplishing
anything."
Mr. Foward agreed and said, "The NAACP does not
support individuals failing a test and then having the opportunity to be
gainfully employed," adding, "If you lower the score for any group of
people, you're not getting the best qualified people for the job."
I am pleased by the positions taken by Messrs.
Lander and Foward. It is truly insulting to suggest that black people cannot
meet the same standards as white people and somehow justice requires lower
standards. Black performance on Dayton's Civil Service exam is really a
message about fraudulent high school diplomas that many black students
receive.
Continued in article
What works in education?
As I said previously, great teachers come in about as many varieties as
flowers. Click on the link below to read about some of the varieties
recalled by students from their high school days. It should be noted that
"favorite teacher" is not synonymous with "learned the
most." Favorite teachers are often great at entertaining and/or
motivating, and they often make learning fun in a variety of
ways.
However, students may actually learn the most from fairly dull teachers with
high standards and demanding assignments and exams. Dull teachers may
also be the dedicated souls who are willing to spend extra time in one-on-one
sessions or extra-hour tutorials that ultimately have an enormous impact on
mastery of the course. And then there are teachers who are not especially
entertaining and do not spend much time face-to-face, yet are winners because
they have developed learning materials that far surpass those of other
teachers in promoting student learning.
The recollections below tend to lean toward entertaining and "fun"
teachers, but you must keep in mind that these were written after the fact by
former high school students. In high school, dull teachers tend not to be
popular either before or after the fact. This is not
always the case when former students recall their college professors.
Handicapped Learning Aids Work Wonders ---
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#Handicapped
Asynchronous Learning ---
http://faculty.trinity.edu/rjensen/255wp.htm
Especially note the SCALE Experiments conducted at the University of
Illinois ---
http://faculty.trinity.edu/rjensen/255wp.htm#Illinois
"A dozen roses to my favorite teacher," The Philadelphia Inquirer,
November 30, 2004 --- http://www.philly.com/mld/inquirer/news/special_packages/phillycom_teases/10304831.htm?1c
Mathematics Assessment Project (learning assessment) --- http://map.mathshell.org
American Council on Education - GED Testing ---
http://www.acenet.edu/Content/NavigationMenu/ged/index.htm
Classroom Tips
Yes! 50 Scientifically Proven Ways to Be Persuasive
From the Financial Rounds Blog on May 4, 2009 ---
http://financialrounds.blogspot.com/
Using "Yes! 50 Scientifically Proven Ways to Be
Persuasive" in the Classroom
I recently started reading Goldstein, Martin,
and Cialdini's "Yes!: 50 Scientifically Proven Ways to Be Persuasive." It
could easily be described as "Freakonomics for Social Psychology". It's a
fun, easy, and very informative read, with each chapter only about 1500-2000
words long, and highlighting one persuasion technique. So, you can knock out
a chapter in 10 minutes or so.
It's a very interesting introduction to the social
psychology literature on persuasion - it lists all the underlying research
in the appendix.
In addition to learning some interesting things,
I've also gotten some great ideas to use in my classes. I'll be discussing
these over the next few weeks, starting with
Chapters 1 & 2:
"The Bandwagon Effect"
One way to increase compliance with a request is to
mention that a lot of other people have done the same thing. In these
chapters, the authors mention a study where they tried to see if they could
increase the percentage of people staying in a hotel who reused towels at
least once during their stay. Their solution was simple. The hotels who do
this typically put a little card in the hotel room touting the benefits of
reusing towels. All they did was add a line to the effect that the majority
of people who stay in hotels do in fact reuse their towels at least once
during their stay. This dramatically increased the percentage of people who
chose to reuse.
In a related study, they added another line stating
that XX% of the people who stayed in this room reused towels. This increased
compliance even more.
Chapter 3:
"What common mistake causes messages to self-destruct?"
The bandwagon effect
can also cause messages to backfire. In one study, they seeded the Petrified
Forest with fake pieces of petrified wood, and then posted signs stating
that "many past visitors have removed the petrified wood from the park,
changing the natural state of the petrified forest", accompanied by a
picture of several visitors taking pieces of wood. These signs actually
increased the incidences of the behavior they were intended to stop. Here
are the applications to my classes: First off, to use the bandwagon effect
in my case course, I'm going to state figures (made up, of course) at the
beginning of class as to the average amount of time past students in that
class have spent preparing each week. I'm also going to tell my classes that
the average evaluation for the professors in the college ranges from 4.2 to
4.8 on a 5 point scale (I know, it's inflated, but it might be interesting
to see what happens if I state that several times during the semester). If I
really want to use the bandwagon effect, I'll mention that evaluations in
THAT particular class have been a bit higher.
As for avoiding the "self-destruct" part of the
bandwagon effect, I plan on spending less time talking about how many
students are absent. If I need to mention it, I'll focus on the flip side
that 94% of the students in this class make the vast majority of classes,
and commend them on that fact.
More to come later. It's a great book, and
inexpensive, too (the paperback is less than $20).
In Defense of "Traditional" Learning and Assessment
April 27, 2009 message from David Albrecht
[albrecht@PROFALBRECHT.COM]
Bob,
Here's another article from the CHE newsletter.
The conclusion from these latest two articles rings true.
Collegiate business courses in general, and collegiate accounting
courses, in particular, have taken their fair share of hits in recent
years, because of the lack of experiential learning built into the
curriculum and so many courses. The traditional approach to collegiate
instruction--lecture and (MC) testing--is too frequently assailed
because students don't become active participants in the learning
process. Nevertheless, accounting students across the country do pick
up on the rules of financial and tax accounting, and the logic of cost
accounting and auditing. I've frequently wondered where the missing
piece is, how a discredited approach to conducting college courses can
produce any learning results at all.
My own thinking had begun to focus on the recitation/homework aspect
built into so many of our courses, and the results of these two studies
seem to back it up.
I have made extensive use of homework assignments over the years, to
the extent that I write my own problems. A HW set for a particular topic
moves from very short "drills" to comprehensive problems that set the
topic into a very realistic setting. What I do isn't unique. However, I
have my own idea about what is realistic.
Anyway, I find this latest news to be a validation for a part of what
we do, and welcome news indeed.
Access to the article below requires a subscription. The part of the
article not quoted IS important, as it pertains to real world
applications.
Dave Albrecht
******quotation begins******
http://chronicle.com/weekly/v55/i34/34a00101.htm
From the issue dated May 1, 2009
Close the Book. Recall. Write It Down.
That old study method still works,
researchers say. So why don't professors preach it?
By DAVID GLENN
The scene: A rigorous intro-level survey
course in biology, history, or economics. You're the instructor, and
students are crowding the lectern, pleading for study advice for the
midterm.
If you're like many professors, you'll tell
them something like this: Read carefully. Write down unfamiliar
terms and look up their meanings. Make an outline. Reread each
chapter.
That's not terrible advice. But some
scientists would say that you've left out the most important step:
Put the book aside and hide your notes. Then recall everything you
can. Write it down, or, if you're uninhibited, say it out loud.
Two psychology journals have recently
published papers showing that this strategy works, the latest
findings from a decades-old body of research. When students study on
their own, "active recall" -- recitation, for instance, or flashcards
and other self-quizzing -- is the most effective way to inscribe
something in long-term memory.
Yet many college instructors are only dimly
familiar with that research. And in March, when Mark A. McDaniel, a
professor of psychology at Washington University in St. Louis and
one author of the new studies, gave a talk at a conference of the
National Center for Academic Transformation, people fretted that the
approach was oriented toward robotic memorization, not true
learning.
Don't Reread
A central idea of Mr. McDaniel's work,
which appears in the April issue of Psychological Science and the
January issue of Contemporary Educational Psychology, is that it is
generally a mistake to read and reread a textbook passage. That
strategy feels intuitively right to many students -- but it's much
less effective than active recall, and it can give rise to a false
sense of confidence.
"When you've got your chemistry book in
front of you, everything's right there on the page, it's all very
familiar and fluent," says Jeffrey D. Karpicke, an assistant
professor of psychology at Purdue University and lead author of a
paper in the May issue of Memory about students' faulty intuitions
about effective study habits.
"So you could say to yourself, 'Yeah, I
know this. Sure, this is all very familiar,'" Mr. Karpicke
continues. "But of course, when you go in to take a classroom test,
or in real life when you need to reconstruct your knowledge, the
book's not there. In our experiments, when students repeatedly read
something, it falsely inflates their sense of their own learning."
These findings about active recall are not
new or faddish or parochial. The research has been deepened and
systematized recently by scholars at the University of California at
Los Angeles and Washington University in St. Louis (where Mr.
Karpicke earned his doctorate in 2007). But the basic insight goes
back decades. One of the new papers tips its hat to a
recitation-based method known as "SQ3R," which was popularized in
Effective Study, a 1946 book by Francis P. Robinson.
So if this wisdom is so well-established --
at least among psychologists -- should colleges explicitly try to
coax students to use these study techniques? And if so, how? That is
the question that the authors of these papers are now pondering.
"I think it's a mistake for us to think
that just publishing this work in a few journals is going to have a
huge impact in the classroom," says Mr. McDaniel.
After a decade of working in this area, Mr.
McDaniel feels enough confidence in his findings that he is willing
to proselytize about them. He and his colleagues have also been
promoting the idea of frequent low-stakes classroom quizzes (The
Chronicle, June 8, 2007).
Among other things, Mr. McDaniel has
recently collaborated with a network of biology instructors who
would like to improve the pass rates in their introductory courses.
One of those scholars is Kirk Bartholomew,
an assistant professor of biology at Sacred Heart University. He
first crossed paths with Mr. McDaniel at a conference sponsored by a
textbook publisher.
"He basically confirmed my ideas -- that
after you've read something once, you've gotten what you're going to
get out of it, and then you need to go out and start applying the
information," Mr. Bartholomew says.
The two scholars collaborated on a Web
interface that encouraged students to try different study
techniques. The first round of research did not turn up any dramatic
patterns, Mr. Bartholomew says -- other than the unsurprising fact
that his students did better if they spent more time studying. But
he says that he looks forward to refining the system.
Rote learning?
In March, however, when Mr. McDaniel took
his message to the National Center for Academic Transformation
meeting, his talk was not entirely well received.
Several days after his appearance, he got a
note from Carol A. Twigg, the center's chief executive. "She said,
'We really loved having you, but you created some controversy
here,'" Mr. McDaniel says. According to Ms. Twigg's note, some
people worried that Mr. McDaniel's techniques might generate rote
memorization at the expense of deeper kinds of learning.
Michael R. Reder, director of Connecticut
College's Center for Teaching and Learning, had a similar reaction
to one of Mr. McDaniel's new papers on studying.
The paper seems perfectly valid on its own
terms and might offer a "useful tool," Mr. Reder says. But in his
view, the paper also "suggests an old model of learning. You know,
I'm going to give information to the students, and the students then
memorize that information and then spit it back."
Mr. McDaniel finds such reactions
frustrating. One experiment in his new paper suggests that a week
after reading a complex passage, people who recited the material
after reading it did much better at solving problems that involved
analyzing and drawing inferences from the material than did people
who simply read the passage twice.
"I don't think these techniques will
necessarily result in rote memorization," Mr. McDaniel says. "If you
ask people to free-recall, you can generate a better mental model of
a subject area, and in turn that can lead to better
problem-solving."
And in some college courses, he continues,
a certain amount of memorization is impossible to escape -- so it
might as well be done effectively.
In Biology 101, for example, "you've got a
heavily fact-laden course. When I talk to biology instructors at Big
Ten universities, they're working really hard to create interesting,
interactive courses where they've got 500 or 600 kids in a lecture
class. But no matter how engaging you make the course, the students
need to have the knowledge base to do the inquiry-based
problem-solving activities that you've designed."
Continued in article
******quotation ends*******
"Imagining College Without Grades," by Scott Jaschik, Inside Higher
Ed, January 22, 2009 ---
http://www.insidehighered.com/news/2009/01/22/grades
Kathleen
O’Brien, senior vice president for academic affairs at
Alverno College, said she realized that it might seem like
the panelists were “tilting at windmills” with their vision
for moving past grades. But she said there may be an
alignment of ideas taking place that could move people away
from a sense that grades are inevitable. First, she noted
that several of the nation’s most
prestigious law schools have moved away from traditional
letter grades, citing a sense that
grades were squelching intellectual curiosity. This trend
adds clout to the discussion and makes it more difficult for
people to say that grades need to be maintained because
professional schools value them. Second, she noted that the
growing use of e-portfolios has dramatized the potential for
tools other than grades to convey what students learn.
Third, she noted that just about everyone views grade
inflation as having destroyed the reliability of grades.
Fourth, she said that with more students taking courses at
multiple colleges — including colleges overseas — the idea
of consistent and clear grading just doesn’t reflect the
mobility of students. And fifth, she noted the reactions in
the room, which are typical of academic groups in that most
professors and students are much more likely to complain
about grading than to praise its accuracy or value. This is
a case of an academic practice, she noted, that is
widespread even as many people doubt its utility.
At the same
time, O’Brien said that one thing holding back colleges from
moving was the sense of many people that doing away with
grades meant going easy on students. In fact, she said,
ending grades can mean much more work for both students and
faculty members. Done right, she said, eliminating grades
promotes rigor.
Continued in article
"Favorite Education Blogs of 2008," by Jay Mathews, The Washington
Post, April 7, 2008 ---
Click Here
Early last year, as
an experiment, I published a
list of what I and
commentator Walt Gardner considered our favorite education blogs. Neither
Gardner nor I had much experience with this most modern form of expression.
We are WAY older than the Web surfing generation. But the list proved
popular with readers, and I promised in that column to make this an annual
event.
My promise was actually more specific: "Next year,
through bribery or trickery, I hope to persuade Ken Bernstein, teacher and
blogger par excellence, to select his favorite blogs and then let me dump on
his choices, or something like that." As I learned long ago, begging works
even better than bribery or trickery, and Bernstein succumbed. Below are his
choices, with some comments from me, and a few of my favorites.
They are in no particular order of quality or
interest. Choosing blogs is a personal matter. Tastes differ widely and
often are not in sync with personal views on how schools should be improved.
I agree with all of Bernstein's choices, even though we disagree on many of
the big issues.
Bernstein is a splendid classroom teacher and a
fine writer, with a gift for making astute connections between
ill-considered policies and what actually happens to kids in school. He is a
social studies teacher at Eleanor Roosevelt High School in Prince George's
County and has been certified by the prestigious National Board for
Professional Teaching Standards. He is also a book reviewer and peer
reviewer for professional publications and ran panels on education at
YearlyKos conventions. He blogs on education, among other topics, at too
many sites to list. He describes his choices here as a few blogs he thinks
"are worthwhile to visit."
· Bridging Differences.
blogs.edweek.org/edweek/Bridging-Differences/
Bernstein: Deborah Meier and Diane Ravitch in the
past have had their differences on educational issues. They both serve at
the Steinhardt School of Education at New York University, and this shared
blog is as valuable as anything on the Web for the insights the two offer,
and for the quality of their dialog.
Mathews: I have a personal bias about this blog. I
know Meier and Ravitch well, consider them the best writers among education
pundits today and frequently bounce ideas off them.
· Eduwonk.
www.eduwonk.com/
Bernstein: I often disagree with Andrew J.
Rotherham, but his has been an influential voice on education policy for
some years, and even now, along with all else he does, he serves on the
Virginia Board of Education.
Mathews: I often agree with Rotherham, and my
editors sometimes complain that I quote him too much. But the guy is only 37
and is going to be an important influence on public school policy for the
rest of my life and long after.
· Edwize.
www.edwize.org/
Bernstein: The site is maintained by the United
Federation of Teachers, the New York affiliate of the American Federation of
Teachers. They have a number of authors, many active in New York schools,
but they occasionally have posts from others. Full disclosure: I have been
invited to cross-post things I have written elsewhere.
Mathews: A nice mix of both comment on policy and
inside-the-classroom stuff from teachers.
· Education Policy Blog.
educationpolicyblog.blogspot.com/
Bernstein: The site describes itself as "a
multiblog about the ways that educational foundations can inform educational
policy and practice! The blog will be written by a group of people who are
interested in the state of education today, and who bring to this interest a
set of perspectives and tools developed in the disciplines known as the
'foundations' of education: philosophy, history, curriculum theory,
sociology, economics and psychology." Most of the participants are
university professors. I am a participant from time to time in this blog.
· Eduwonkette.
blogs.edweek.org/edweek/eduwonkette/
Bernstein: The name is obviously a takeoff on the
foregoing. The author of this one occasionally posts elsewhere as well. This
site often provides some incisive and clear explanations of the key aspects
of educational policy.
Mathews: I agree, but have a bias here, too. This
is an Education Week blog, and I am on the board of trustees of the
nonprofit that publishes Ed Week.
Continued in article
Bob Jensen's threads on blogs are at
http://faculty.trinity.edu/rjensen/ListservRoles.htm
Learning Styles Sites
January 1, 2009 message from Pat Wyman
[raisingsmarterchildren@gmail.com]
Hello Bob,
Happy New Year! Your name came up through a Google
alert, attached to my website and the complimentary learning styles
inventory at
http://www.howtolearn.com
It is on your page, from the community at
http://www.elearninglearning.com/learning-styles/microsoft/&query=www.howtolearn.com
I want to thank you for this, and if there is any way I can contribute
to your blog and yours to mine, articles, interviews, etc. I'd love to
connect with you.
You're doing wonderful work!
Warmly,
Pat Wyman, M.A.
-- Pat Wyman Best selling author, Learning vs.
Testing Co-Author,
Book Of The Year In the Medicine Category, The Official Autism 101 Manual
University Instructor of Continuing Education, California State University,
East Bay Founder,
http://www.HowToLearn.com and
http://www.RaisingSmarterChildren.com
Winner, James Patterson PageTurner Award Get your copy of Learning vs.
Testing with complimentary materials at http://www.learningvstesting4.html
Get Tips For Raising A Smarter Child at
http://www.RaisingSmarterChildren.com
"There are two ways you can live your life - one as
if nothing is a miracle, and the other as if everything is a miracle."
Albert Einstein
Bob Jensen's threads on assessment and learning ---
http://faculty.trinity.edu/rjensen/assess.htm
Bob Jensen's threads on metacognitive learning ---
http://faculty.trinity.edu/rjensen/265wp.htm
Bob Jensen's threads on asynchronous learning ---
http://faculty.trinity.edu/rjensen/255wp.htm
Bob Jensen's threads on education technology ---
http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
April 4, 2008 message from Carolyn Kotlas
[kotlas@email.unc.edu]
ASSESSING EFFECTIVENESS OF NEW TECHNOLOGIES
"From the perspective of instructional designers
and instructors, the decision to adopt a new technology can be exceedingly
difficult. On the one hand, we all want to create the best possible learning
environment for our students. On the other, there is the persistent fear
that integrating a new technology will be onerous in terms of integration
and only marginal in terms of impact, or worse, it may have a negative
impact."
In "How Do We Assess the Effectiveness of New
Technologies and Learning Environments?" (SLOAN-C VIEW, vol. 7, issue 2,
February 2008), Philip Ice suggests using the Community of Inquiry Framework
(CoI): "a theoretical model that seeks to explain the online learning
experience in terms of three overlapping presences: teaching, social and
cognitive." He cites two studies that support the application of CoI for
exploring the impact of new technologies in education. The article,
including links to the cited studies, is available at
http://www.aln.org/publications/view/v7n2/viewv7n2.htm
(Please note: registration is required to view some
articles; registration is free.)
Sloan-C View: Perspectives in Quality Online
Education [ISSN:
1541-2806] is published by the Sloan Consortium
(Sloan-C). Current and back issues are available at
http://www.aln.org/publications/view/
For more information, contact: The Sloan Center at Olin and Babson Colleges,
Franklin W. Olin College of Engineering, Olin Way, Needham MA 02492-1200
USA; tel: 781-292-2523; fax: 781-292-2505;
email: info@sloan-c.org ;
Web:
http://www.sloan-c.org/
Sloan-C is a consortium of institutions and
organizations committed "to help learning organizations continually improve
quality, scale, and breadth of their online programs according to their own
distinctive missions, so that education will become a part of everyday life,
accessible and affordable for anyone, anywhere, at any time, in a wide
variety of disciplines." Sloan-C is funded by the Alfred P. Sloan
Foundation.
DO STUDENTS PREFER INTENSIVE COURSES?
Researchers at the University of Texas at Austin
conducted a study to determine which was preferred by students: "regular"
courses (typical for traditional, residential institutions) or "intensive"
courses -- "those taught on a tighter than normal schedule, with more class
time each week, but fewer weeks" (typical of online courses taught at
for-profit institutions). Students rated the
intensive courses significantly higher, causing
the researchers to suggest that residential colleges may want to consider
offering more courses of this type.
Results of the study were presented at the 2008
Annual Meeting of the American Educational Research Association. An article
about the research (along with reader comments) is available:
"Students Prefer Intensive Courses"
INSIDE HIGHER ED, March 28, 2008,
http://www.insidehighered.com/news/2008/03/28/intensive
Using Field Lab Write-ups to Develop Observational and Critical Thinking Skills
---
http://serc.carleton.edu/NAGTWorkshops/structure04/activities/3856.html
Does technology have no discernable impact on learning?
I've never been a disciple of technology. For me cell
phones are multifunctional, multicolor devices that empower millions of us with
little worth saying to interrupt other millions of us who ought to have
something better to do. I don't want my car to talk to me, I don't want General
Motors to know my latitude and longitude, and I don't need a pocket-size liquid
crystal New York Times or instant access to thirty-second videos of
skateboarding dogs . . . Many American students aren't doing all that well
academically, and almost as many experts are peddling cures. Many prescribe
computers as the miracle that will rescue our kids from scholastic mediocrity.
That's why states like Michigan and Pennsylvania distributed laptops to
thousands of students. Maine led the parade by handing out laptops to every
seventh and eighth grader. Sponsors of the giveaways promised "higher student
performance." Unfortunately, the results have been disappointing. When the test
results of Maine students showed no improvement, boosters explained that it
would "take more time for the impact of laptops to show up." Inconveniently,
Maine's lackluster outcome only confirmed a rigorous international study of
student computer use in thirty-one countries, which found that students who use
computers at school "perform sizably and statistically worse" than students who
don't. Analysts warned that when computer use replaces "traditional learning
methods," it "actually harms the student." A review of California schools
determined that Internet access had "no measurable impact on student
achievement." A 2007 federal study concluded that classroom use of reading and
math software likewise yielded "no significant differences" in student
performance.
Peter Berger, "Stuck on the Cutting Edge," The Irascible Professor,
December 19, 2007 ---
http://irascibleprofessor.com/comments-12-19-07.htm
Jensen Comment
Anecdotally technology can favorably impact learning. In my own case, it's had
an enormous positive impact on my scholarship, my research, and my publishing.
Number 1 is the communications and knowledge sharing (especially from listservs
and blogs) ---
http://faculty.trinity.edu/rjensen/ListServRoles.htm
Number 2 is the access to enormous databases and knowledge portals ---
http://faculty.trinity.edu/rjensen/Searchh.htm
Number 3 is the tremendous increase in access provided by the campus
libraries for scholars who take the time and effort to determine what is really
there.
Number 4 is open courseware. The open courseware (especially shared lecture
materials and videos) from some of the best professors in our leading
universities such as 1,500 courses served up by MIT and 177 science courses
served up on YouTube by UC Berkeley are truly amazing. Critics of technology
have probably never utilized these materials ---
http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI
I think Peter Berger overlooks some of the positive outcomes of technology on
learning ---
http://faculty.trinity.edu/rjensen/Assess.htm#WhatWorks
More importantly look at the SCALE experiments at the University of Illinois ---
http://faculty.trinity.edu/rjensen/255wp.htm#Illinois
Although I always like Peter Berger's essays, this time he also overlooks
much of the dark side of technology on learning ---
http://faculty.trinity.edu/rjensen/000aaa/theworry.htm
Technology and learning have much more complicated interactions than are
superficially glossed over in this particular essay ---
http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
"Beyond
Tests and Quizzes," Inside Higher Ed, December 5, 2007 ---
http://www.insidehighered.com/news/2007/12/05/mezeske
With federal and state
officials, accreditors and others all talking about the importance of
assessment, what’s going on in classrooms? Assessment, after all, takes
place every time a professor gives a test. A new volume of essays,
Beyond Tests and Quizzes: Creative Assessments in the College Classroom
(Jossey-Bass) argues that
assessments in the classroom could be more creative and more useful to the
educational process. The editors of the volume are Richard Mezeske, chair of
education at Hope College, and Barbara A. Mezeske, an associate professor of
English at Hope. In an e-mail interview, they discussed the themes of their
new book. . .
Q: Could you
share your definition of “creative assessment” and some of
your favorite examples?
A:
Creative assessment is flexible, timely, and
interesting to both the instructor and to the
student. When teachers shift instruction based on student
feedback, then they are being flexible and creative. We do
not mean that teachers should design ever more imaginative
and bizarre assessment tools, or that they should ignore
mandated curricular content. Rather, creative assessment, as
we use the term, implies focused attention to student
learning, reading the signs, engaging students, and
listening to their feedback. Creative assessment often gives
students opportunities to apply and deepen their superficial
knowledge in their discipline.
For example,
in the chapter in our book about teaching grammar, Rhoda
Janzen describes an assessment that requires students to
devise and play grammar games: They cannot do that without a
deep mastery of the principles they are learning. In another
chapter, Tom Smith describes how he grades individuals’
tests during private office appointments: He affirms correct
responses, asks students to explain incomplete or erroneous
answers, and both gives and gets immediate, personal
feedback on a student’s ability to recall and apply
concepts. In a third chapter, David Schock writes about
taking media-production skills into the community, allowing
students to demonstrate their knowledge and skills by
creating public service announcements and other media
products for an audience outside the classroom.
Q: How is
technology (the Web, etc.) changing the potential of testing
and assessment?
A:
Technology is expanding the possibilities for assessment
while at the same time complicating assessment. For example,
checking understanding of a group and individuals during
instruction is now relatively simple with electronic tools
which allow students to press a button and report what they
believe about concept X. The results are instantaneously
displayed for an entire class to see and the instructor can
adjust instruction based on that feedback. However,
technology can complicate, too. How is a teacher able to
guarantee student X working at a remote computer station on
an assessment is actually student X, and not student Y
covering for student X? Does the technology merely make the
assessment tool slick without adding substance to the
assessment? In other words, merely using technology does not
automatically make the assessment clever, substantive,
correct, or even interesting, but it can do all of
those things.
Continued in article
"The Great Debate: Effectiveness of Technology in Education," by
Patricia Deubel, T.H.E. Journal, November 2007 ---
http://www.thejournal.com/articles/21544
According to Robert Kuhn (2000), an expert in brain
research, few people understand the complexity of that change. Technology is
creating new thinking that is "at once creative and innovative, volatile and
turbulent" and "nothing less than a shift in worldview." The change in
mental process has been brought about because "(1) information is freely
available, and therefore interdisciplinary ideas and cross-cultural
communication are widely accessible; (2) time is compressed, and therefore
reflection is condensed and decision-making is compacted; (3) individuals
are empowered, and therefore private choice and reach are strengthened and
one person can have the presence of an institution" (sec: Concluding
Remarks).
If we consider thinking as both individual
(internal) and social (external), as Rupert Wegerif (2000) suggests, then "[t]echnology,
in various forms from language to the internet, carries the external form of
thinking. Technology therefore has a role to play through supporting
improved social thinking (e.g. providing systems to mediate decision making
and collective reasoning) and also through providing tools to help
individuals externalize their thinking and so to shape their own social
worlds" (p. 15).
The new tools for communication that have become
part of the 21st century no doubt contribute to thinking. Thus, in a debate
on effectiveness or on implementation of a particular tool, we must also
consider the potential for creativity, innovation, volatility, and
turbulence that Kuhn (2000) indicates.
Continued in article
Bob Jensen's threads on education technology are at
http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
Questioning the Admissions Assumptions
And further, the study finds that all of the
information admissions officers currently have (high school grades,
SAT/ACT scores, essays, everything) is of limited
value, and accounts for only 30 percent of the grade variance in colleges —
leaving 70 percent of the variance unexplained.
Scott Jaschik, "Questioning the Admissions Assumptions," Inside Higher Ed, June
19, 2007 ---
http://www.insidehighered.com/news/2007/06/19/admit
The report is available at
http://cshe.berkeley.edu/publications/docs/ROPS.GEISER._SAT_6.12.07.pdf
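The "percent of the grade variance" figure quoted above is the coefficient of determination (R-squared) from regressing college grades on the admissions predictors. As a rough illustration of what that statistic means, here is a minimal sketch with invented numbers, not the Berkeley study's data: an R-squared of 0.30 would say the predictors account for 30 percent of the spread in grades, leaving 70 percent unexplained.

```python
def r_squared(y, y_pred):
    """Fraction of the variance in y accounted for by the predictions."""
    mean_y = sum(y) / len(y)
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)                 # total variation
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, y_pred))    # unexplained variation
    return 1 - ss_res / ss_tot

# Hypothetical college GPAs and the GPAs predicted from admissions data
gpas      = [3.0, 3.5, 2.5, 3.8, 2.9]
predicted = [3.1, 3.3, 2.8, 3.5, 3.0]
```

A perfect predictor yields an R-squared of 1.0; the study's reported 0.30 means most of the variation in college grades is left unexplained by everything admissions officers currently look at.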
Roland G. Fryer, who was hired by Schools Chancellor
Joel I. Klein to advise him on how to narrow the racial gap in achievement in
the city’s schools, made his professional name in economics by applying complex
algorithms to document how black students fall behind their white peers. But his
life story challenges his own calculations. . . . His first job, though, he
said, will be to mine data — from graduation rates to test scores to demographic
information — to find out why there are wide gulfs between schools. Why, for
example, does one school in Bedford-Stuyvesant do so much better than a school
just down the block? And he will monitor the pilot program to pay fourth- and
seventh-grade students as much as $500 for doing well on a series of
standardized tests. That program will begin in 40 schools this fall. He hopes to
find other ways to motivate students.
Jennifer Medina, "His Charge: Find a Key to Students’ Success," The New York
Times, June 21, 2007 ---
http://www.nytimes.com/2007/06/21/nyregion/21fryer.html?_r=1&oref=slogin
Jensen Comment
I suspect that SAT scores are more predictive for some college graduates than
others. For example, SAT math performance may be a better predictor of grades in
mathematics and science courses than SAT verbal performance is a predictor of
grades in literature and language courses. The study mentioned above does not
delve into this level of detail. Top universities that have dropped SAT
requirements (e.g., under the Texas Top Ten Percent Law) are not especially
happy about losing so many top SAT performers ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#10PercentLaw
SAT/ACT testing falls down because it does not examine motivation very well.
High school grades fail because of rampant grade inflation and lowered academic
standards in high schools. College grades are not a good criterion because of
grade inflation in colleges ---
http://faculty.trinity.edu/rjensen/assess.htm#GradeInflation
Question
Is homework credit sometimes dysfunctional to learning?
If the instructor allows face-to-face study groups, extra-help tutorials, and
chat rooms, what is so terrible about this Facebook study group?
Answer
Apparently it's the fact that ten percent course credit was given for homework
that was discussed in the study group. It seems unfair, however, to single out
this one student running the Facebook study group. If the students were
"cheating" by sharing tips on homework, they were probably also doing it
face-to-face. All students who violate the code of conduct should be sanctioned
or forgiven based on the honor code of the institution.
Ryerson U. Student Faces
Expulsion for Running a Facebook Study Group
A student at Ryerson
University, in Toronto, is facing expulsion for running a Facebook study group,
the
Toronto Star reports. Chris Avenir, a
first-year engineering student, is facing expulsion from the school on 147
counts of academic charges — one for himself, and one for every student who used
the Facebook group “Dungeons/Mastering Chemistry Solutions” to get homework
help. University officials say that running such a group is in violation of the
school’s academic policy, which says no student can undertake activity to gain
academic advantage. Students argue, however, that the group was analogous to any
in-person study group. Of course, this wouldn’t be the first
Facebook-related expulsion hearing. The
expulsion hearing is scheduled for Tuesday.
Hurley Goodall,
Chronicle of Higher Education, March 7, 2008 ---
http://chronicle.com/wiredcampus/index.php?id=2801&utm_source=wc&utm_medium=en
Jensen Comment
My approach was to assign homework for no credit and then administer online
quizzes. Students were assigned different partners each week who attested to
observing no cheating while an assigned "partner" took the online quiz. You can
read the following at ---
http://faculty.trinity.edu/rjensen/acct5342/acct5342.htm
Most every week beginning in
Week 2, you will be required to take an online quiz for a chapter from the
online textbook by Murthy and Groomer. This book is not in the bookstore.
Students should immediately obtain a password and print the first three
chapters of the book entitled
Accounting Information Systems: A Database Approach. You can purchase a
password at
http://www.cybertext.com/forms/accountform.shtml
You will then be able to access the book and the online quizzes at any time
using the book list at
http://www.cybertext.com/
Each week students are to take an online quiz in the presence of an assigned
student partner who then signs the attest form at
http://faculty.trinity.edu/rjensen/acct5342/attest.htm
The online quizzes are relatively easy if you take notes while reading the
assigned chapter. You may use your notes for each quiz. However, you may
not view a copy of the entire chapter while taking a quiz.
Bob Jensen's threads on cheating are at
http://faculty.trinity.edu/rjensen/Plagiarism.htm
Lawyers Don't Like Being Ranked
It's a sunny day in Seattle when two lawyers can bring
a class action suit on their own behalf -- and then see it rejected on First
Amendment grounds. That's what happened last week in the Emerald City, when
Federal District Judge Robert S. Lasnik ruled that there was no basis for
cracking down on a lawyer-rating Web site merely because some of its ratees
didn't like how they were portrayed. The site, called Avvo, does for lawyers
what any number of magazines and Web sites have been doing for other professions
for years. Magazines regularly publish stories that rank an area's doctors and
dentists. There are rating sites and blogs for the "best" hairstylists,
manicurists, restaurants and movie theaters. Almost any consumer product or
service these days is sorted and ranked.
"Judging Lawyers," The Wall Street Journal, December 24, 2007; Page A10
---
http://online.wsj.com/article/SB119846335960848261.html
Avvo Lawyer Ratings ---
http://www.avvo.com/
Jensen Comment
In fairness most of these ranking systems are misleading. For example,
physicians and lawyers who lose more often may also be willing to take on the
tougher cases having low probabilities of success. Especially note
"Challenging Measures of Success" at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#BusinessSchoolRankings
And some professionals who win a lot may do so in unethical ways. And lawyers,
like physicians, have different specialties, such that within a particular
specialty, perhaps an obscure one rarely singled out from over 100 specialties,
they may be outstanding even though broad rankings fail to show it.
Bob Jensen's threads on college ranking controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#BusinessSchoolRankings
Those conclusions come
from
a national survey of employers
with at least 25 employees and significant
hiring of recent college graduates, released
Tuesday by the Association of American
Colleges and Universities. Over all, 65
percent of those surveyed believe that new
graduates of four-year colleges have most or
all of the skills to succeed in entry-level
positions, but only 40 percent believe that
they have the skills to advance.
. .
.
In
terms of specific skills, the employers didn’t give many A’s
or fail many either. The employers were asked to rank new
graduates on 12 key areas, and the grads did best in
teamwork, ethical judgments and intercultural work, and
worst in global knowledge, self-direction and writing.
Employers' Ratings of College Graduates' Preparedness on a 1-10 Scale

Category                 Mean Rating   % giving high (8-10) rating   % giving low (1-5) rating
Teamwork                     7.0                  39%                          17%
Ethical judgment             6.9                  38%                          19%
Intercultural skills         6.9                  38%                          19%
Social responsibility        6.7                  35%                          21%
Quantitative reasoning       6.7                  32%                          23%
Oral communication           6.6                  30%                          23%
Self-knowledge               6.5                  28%                          26%
Adaptability                 6.3                  24%                          30%
Critical thinking            6.3                  22%                          31%
Writing                      6.1                  26%                          37%
Self-direction               5.9                  23%                          42%
Global knowledge             5.7                  18%                          46%
To
the extent that employers give graduates mixed grades, that
raises the question of how they determine who is really
prepared. Many of the existing tools appear to be
insufficient, the poll found.
Continued in
article
Jensen Comment
This study is misleading in the sense that large employers generally hire
above-average graduates. This skews the results upward with respect to the
entire population of college graduates. Colleges have a long way to go in modern
times.
Bob Jensen's threads higher education controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
Not too surprising that
students who did well on the SAT would also perform well on the vaguely similar
GMAT, especially since as a rule colleges don't teach the material on the GMAT.
Parnassus (See Below)
"Which College Scores Best on the GMAT?" by
Geoff Gloeckler, Bloomberg Business Week, July 12, 2011 ---
http://www.businessweek.com/bschools/blogs/mba_admissions/archives/2011/07/which_college_scores_best_on_the_gmat.html
A few weeks ago
we were discussing the correlation between undergraduate institution
and GMAT scores. We knew which B-schools boast the highest scoring
MBA students (Stanford,
Yale). What we didn’t know is which undergraduate institutions
produce grads who fare the best on the test. It was a statistic none
of us had seen before.
Thanks to the
mountain of data we collect in our various ranking
projects—specifically the graduate surveys from the
MBA Class of 2010—we had the information
necessary to find the answer. So here it is: By and large, the
elite, private institutions fare the best, with Harvard (738.0 GMAT
average), Yale (732.0), and MIT (731.7) leading the way.
In fact, of the 30
universities whose grads average a 700 or higher on the test, only
three—UC Berkeley (711.1), University of Washington (707.5), and
UCLA (707.2)—are public schools.
We started with
about 200 schools then removed those with fewer than 12 grads in the
sample. This left a total of 107 universities, with scores ranging
from 738 at Harvard to 633 at Louisiana State. The average score,
overall, was 686. The average number of respondents for each school
was 39.
Obviously, for MBA
applicants who have already earned their undergraduate degree, this
information isn't of much value, but for high school juniors and
seniors who see an MBA in their futures, this list might be
something to take into consideration.
(Note: Scores are
not limited to students who graduated with an undergraduate degree
in business.)
Here's the top 30:
1. Harvard
738.0
2. Yale 732.0
3. MIT 731.7
4. Rice University 731.3
5. Brandeis University 729.4
6. Princeton 727.7
7. Stanford University 724.0
8. Brown University 722.2
9. Williams College 721.6
10. Carnegie Mellon 720.9
11. Duke University 720.2
12. Dartmouth 716.7
13. Wesleyan University 716.2
14. Amherst College 714.4
15. Carleton College 714.2
16. University of Chicago 712.9
17. Columbia University 712.2
18. University of Pennsylvania 712.2
19. Northwestern 712.0
20. UC Berkeley 711.1
21. Claremont McKenna 708.6
22. Middlebury College 707.6
23. University of Washington 707.5
24. UCLA 707.2
25. University of Notre Dame 702.5
26. Cornell University 702.0
27. Davidson College 701.5
28. Southern California 701.0
29. Johns Hopkins 700.8
30. Bowdoin College 700.5
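The screening-and-averaging procedure the article describes (drop undergraduate schools with fewer than 12 MBA-graduate respondents, then average GMAT scores per school) can be sketched as follows. The data below are invented for illustration; they are not Bloomberg's actual survey records.

```python
from collections import defaultdict

# Illustrative (undergraduate school, GMAT score) pairs; invented data.
responses = [
    ("Harvard", 740), ("Harvard", 736),
    ("Tiny College", 700),   # too few respondents in this toy sample
]

MIN_RESPONDENTS = 12  # the article's cutoff for including a school

def school_averages(responses, min_n=MIN_RESPONDENTS):
    """Average GMAT per school, keeping only schools with at least min_n respondents."""
    by_school = defaultdict(list)
    for school, score in responses:
        by_school[school].append(score)
    return {
        school: sum(scores) / len(scores)
        for school, scores in by_school.items()
        if len(scores) >= min_n
    }
```

With the real survey's average of about 39 respondents per school, this 12-respondent floor is what trimmed the roughly 200-school pool down to the 107 schools ranked above.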
Jensen Comment
Interestingly, the GMAT testing service was one of the very first services to
use computers to grade essay questions ---
http://faculty.trinity.edu/rjensen/assess.htm#ComputerBasedAssessment
Bob Jensen's threads on ranking controversies
---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#BusinessSchoolRankings
Question
What factors most heavily influence student performance and desire to take more
courses in a given discipline?
Answer
These outcomes are too complex to be predicted very well. Sex and age of
instructors have almost no impact. Teaching evaluations have a very slight
impact, but there are just too many complexities to find dominant factors
cutting across a majority of students.
Oreopoulos said the findings bolster a conclusion he
came to in a previous academic paper that subjective qualities, such as how a
professor fares on student evaluations, tell you more about how well students
will perform and how likely they are to stay in a given course than do
observable traits such as age or gender. (He points out, though, that even the
subjective qualities aren’t strong indicators of student success.) “If I were
concerned about improving teaching, I would focus on hiring teachers who perform
well on evaluations rather than focus on age or gender,” he said.
Elia Powers, "Faculty Gender and Student Performance," Inside Higher Ed,
June 21, 2007 ---
http://www.insidehighered.com/news/2007/06/21/gender
Jensen Comment
A problem with increased reliance on teaching evaluations to measure performance
of instructors is that this, in turn, tends to encourage grade inflation ---
http://faculty.trinity.edu/rjensen/assess.htm#GradeInflation
Question
What parts of a high school curriculum are the best predictors of success as a
science major in college?
New research by professors at Harvard University
and the University of Virginia has found that no single high school science
course has an impact beyond that type of science, when it comes to predicting
success in college science. However, the researchers found that a rigorous
mathematics curriculum in high school has a significant impact on performance in
college science courses. The research, which will be published in Science, runs
counter to the “physics first” movement in which some educators have been
advocating that physics come before biology and chemistry in the high school
curriculum. The study was based on analysis of a broad pool of college students,
their high school course patterns, and their performance in college
science.
Inside Higher Ed, July 27, 2007 ---
http://www.insidehighered.com/news/2007/07/27/qt
Jensen Comment
This comes at a time when some colleges are trying to promote applications and
admissions by dropping the SAT testing requirements for admission. In Texas, the
top 10% of any state high school class do not even have to take the SAT for
admission to any state university in Texas. Of course high schools may still
have a rigorous mathematics curriculum, but what high school student aiming for
the 10% rule is going to take any rigorous course that is not required for high
school graduation? The problem is that rigorous elective courses carry a higher
risk of lowering the all-important grade point average.
Bob Jensen's threads on higher education controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
"NLJ: The Impact of Higher Bar Passage Requirements on Law
School Diversity," by Paul Caron, TaxProf Blog, July 18, 2011 ---
http://taxprof.typepad.com/
NLJ: The Impact of Higher Bar Passage Requirements on Law School
Diversity
National Law Journal,
ABA Again Confronts the Diversity Dilemma: A Renewed Push for Higher Law
School Standards Has its Downside, by Karen Sloan:
[The ABA] is trying to reconcile the legal
profession's need for greater diversity with its desire to push law
schools to better prepare students to pass the bar. For the second time
in four years, it is considering raising the minimum bar-passage-rate
requirement as part of a comprehensive review of law school
accreditation standards. ...
The hope is that higher standards would push
schools with lower passage rates to invest more in academic support and
bar preparation. ... They also would serve a consumer-protection
function, assuring law students a reasonable expectation of passing the
bar.
The ABA has already signaled that it takes
bar-passage rates seriously. It
revoked provisional accreditation from the University of La Verne
College of Law in Ontario, Calif., in June
because of the school's low bar-passage rates. In 2009, a scant 34% of
La Verne students passed the California bar examination on the first
try, and the school's first-time bar-passage rate was 53% in 2010 —
improved, but still not good enough, according to the ABA.
Applying a bright-line bar-passage standard is
a fairly new idea for the ABA. Before 2008, the ABA spelled out no
specific bar-passage minimum. Instead, it enforced what was called the
"70/10 Rule": At least 70% of the school's first-time bar takers had to
pass the exam in the school's home state. In the alternative, the
first-time bar-pass rate could be no lower than 10% below the average of
other ABA-accredited schools in that state.
The U.S. Department of Education, which has
authorized the ABA to be the national accreditor of law schools, asked
for a clearer standard in 2007. After protracted wrangling, the ABA
adopted a requirement that at least 75% of a law school's graduates pass
the bar exam in at least three of the past five years. Schools can also
meet the standard if their first-time bar-passage rate is no more than
15% below other ABA schools in the same state during three of the past
five years. The 15% requirement is intended to level the playing field
across states, given that passage rates vary widely depending on
jurisdiction. The outcome was a compromise, representing a minimum
standard higher than what diversity advocates wanted but lower than the
initial proposal. ...
The new proposal would require that at least
80% of graduates pass the bar in three of the past five years, or that
first-time bar-passage rates be no more than 10% below other schools in
the same state — bringing the standards closer to the test used before
2008.
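The two-pronged rule described above can be sketched in code. This is a hypothetical illustration only (the function name, argument layout, and five-year window convention are my assumptions, not the ABA's): a school passes if its overall rate clears 80% in three of the past five years, or if its first-time rate is within 10 points of the state average in three of the past five years.

```python
def meets_proposed_standard(overall_rates, first_time_rates, state_avgs,
                            overall_min=0.80, gap_max=0.10, years_needed=3):
    """Each argument is a list of five yearly figures (fractions, most recent last).

    Prong 1: at least 80% of graduates passed in >= 3 of the past 5 years.
    Prong 2: first-time rate within 10 points of the state average in >= 3 years.
    """
    overall_ok = sum(r >= overall_min for r in overall_rates) >= years_needed
    gap_ok = sum(ft >= avg - gap_max
                 for ft, avg in zip(first_time_rates, state_avgs)) >= years_needed
    return overall_ok or gap_ok
```

The pre-2008 "70/10 Rule" quoted earlier has the same either/or shape and could be expressed by changing the thresholds.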
Turkey Times for Overstuffed Law Schools ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#OverstuffedLawSchools
Grades are even worse than tests as predictors of success
"The Wrong Traditions in Admissions," by William E. Sedlacek, Inside
Higher Ed, July 27, 2007 ---
http://www.insidehighered.com/views/2007/07/27/sedlacek
Grades and test scores have worked well as the
prime criteria to evaluate applicants for admission, haven’t they? No!
You’ve probably heard people say that over and over again, and figured that
if the admissions experts believe it, you shouldn’t question them. But that
long held conventional wisdom just isn’t true. Whatever value tests and
grades have had in the past has been severely diminished. There are many
reasons for this conclusion, including greater diversity among applicants by
race, gender, sexual orientation and other dimensions that interact with
career interests. Predicting success with so much variety among applicants
with grades and test scores asks too much of those previous stalwarts of
selection. They were never intended to carry such a heavy expectation and
they just can’t do the job anymore, even if they once did. Another reason is
purely statistical. We have had about 100 years to figure out how to measure
verbal and quantitative skills better but we just can’t do it.
Grades
are even worse than tests as predictors of success.
The major reason is
grade inflation. Everyone
is getting higher grades these days, including those in high
school, college, graduate, and professional school. Students
are bunching up at the top of the grade distribution and we
can’t distinguish among them in selecting who would make the
best student at the next level.
We need a fresh approach. It is not good enough to feel
constrained by the limitations of our current ways of
conceiving of tests and grades. Instead of asking, “How can
we make the SAT and other such tests better?” or “How can we
adjust grades to make them better predictors of success?” we
need to ask, “What kinds of measures will meet our needs now
and in the future?” We do not need to ignore our current
tests and grades; we need to add some new measures that
expand the potential we can derive from assessment.
We appear to
have forgotten why tests were created in the first place.
While they were always considered to be useful in evaluating
candidates, they were also considered to be more equitable
than using prior grades because of the variation in quality
among high schools.
Test results
should be useful to educators — whether involved in
academics or student services — by providing the basis to
help students learn better and to analyze their needs. As
currently designed, tests do not accomplish these
objectives. How many of you have ever heard a colleague say
“I can better educate my students because I know their SAT
scores”? We need some things from our tests that currently
we are not getting. We need tests that are fair to all and
provide a good assessment of the developmental and learning
needs of students, while being useful in selecting
outstanding applicants. Our current tests don’t do that.
The rallying
cry of “all for one and one for all” is one that is used
often in developing what are thought of as fair and
equitable measures. Commonly, the interpretation of how to
handle diversity is to hone and fine-tune tests so they
work equally well for everyone (or at least to try to do
that). However, if different groups have different
experiences and varied ways of presenting their attributes
and abilities, it is unlikely that one could develop a
single measure, scale, test item etc. that could yield
equally valid scores for all. If we concentrate on results
rather than intentions, we could conclude that it is
important to do an equally good job of selection for each
group, not that we need to use the same measures for all to
accomplish that goal. Equality of results, not process, is
most important.
Therefore,
we should seek to retain the variance due to culture, race,
gender, and other aspects of non-traditionality that may
exist across diverse groups in our measures, rather than
attempt to eliminate it. I define non-traditional persons as
those with cultural experiences different from those of
white middle-class males of European descent; those with
less power to control their lives; and those who experience
discrimination in the United States.
While
the term “noncognitive” appears to be precise and
“scientific” sounding, it has been used to describe a wide
variety of attributes. Mostly it has been defined as
something other than grades and test scores, including
activities, school honors, personal statements, student
involvement etc. In many cases those espousing noncognitive
variables have confused a method (e.g. letters of
recommendation) with what variable is being measured. One
can look for many different things in a letter.
Robert Sternberg’s system of
viewing intelligence provides a model, but it is important to
know what sorts of abilities are being assessed and that
those attributes are not just proxies for verbal and
quantitative test scores. Noncognitive variables appear to
be in Sternberg’s experiential and contextual domains, while
standardized tests tend to reflect the componential domain.
While noncognitive variables are useful for all students, they are
particularly critical for non-traditional students, since
standardized tests and prior grades may provide only a
limited view of their potential.
I and
my colleagues and students have developed a system of
noncognitive variables that has worked well in many
situations. The eight variables in the system are
self-concept, realistic self-appraisal, handling the system
(racism), long range goals, strong support person,
community, leadership, and nontraditional knowledge.
Measures of these dimensions are available at no cost in a
variety of articles and in a book,
Beyond the Big Test.
This
Web site has previously featured how
Oregon State University has used a
version of this system very successfully in increasing their
diversity and student success. Aside from increased
retention of students, better referrals for student services
have been experienced at Oregon State. The system has also
been employed in selecting Gates Millennium Scholars. This
program, funded by the Bill & Melinda Gates Foundation,
provides full scholarships to undergraduate and graduate
students of color from low-income families. The SAT scores
of those not selected for scholarships were somewhat higher
than those selected. To date this program has provided
scholarships to more than 10,000 students attending more
than 1,300 different colleges and universities. Their
college GPAs are about 3.25, with five year retention rates
of 87.5 percent and five year graduation rates of 77.5
percent, while attending some of the most selective colleges
in the country. About two thirds are majoring in science and
engineering.
The
Washington State Achievers program
has also employed the noncognitive variable system discussed
above in identifying students from certain high schools that
have received assistance from an intensive school reform
program also funded by the Bill & Melinda Gates Foundation.
More than 40 percent of the students in this program are
white, and overall the students in the program are enrolling
in colleges and universities in the state and are doing
well. The program provides high school and college mentors
for students. The
College Success Foundation is
introducing a similar program in Washington, D.C., using the
noncognitive variables my colleagues and I have developed.
Recent
articles in this publication have discussed programs at the
Educational Testing Service for
graduate students and
Tufts University for
undergraduates that have incorporated noncognitive
variables. While I applaud the efforts for reasons I have
discussed here, there are questions I would ask of each
program. What variables are you assessing in the program? Do
the variables reflect diversity conceptually? What evidence
do you have that the variables assessed correlate with
student success? Are the evaluators of the applications
trained to understand how individuals from varied
backgrounds may present their attributes differently? Have
the programs used the research available on noncognitive
variables in developing their systems? How well are the
individuals selected doing in school compared to those
rejected or those selected using another system? What are
the costs to the applicants? If there are increased costs to
applicants, why are they not covered by ETS or Tufts?
Until these
and related questions are answered these two programs seem
like interesting ideas worth watching. In the meantime we
can learn from the programs described above that have been
successful in employing noncognitive variables. It is
important for educators to resist half measures and to
confront fully the many flaws of the traditional ways higher
education has evaluated applicants.
A different way to think about assessment
January 26, 2007 message from Carnegie President
[carnegiepresident@carnegiefoundation.org]
A different way to think about ... assessment In
the most recent issue of
Change magazine,
I join several other authors to examine higher education's ongoing
responsibility to tell the story of student learning with care and
precision. Fulfilling this responsibility at the institutional level
requires ongoing deliberations among colleagues and stakeholders about the
specific learning goals we seek and the broad educational purposes we
espouse. What will motivate such discussions?
In this month's Carnegie Perspectives,
Lloyd Bond makes a strong case for the use of
common examinations as a powerful form of assessment as well as a fruitful
context for faculty deliberations about their goals for students. Using an
institutional example from the
Carnegie/Hewlett project on strengthening teaching
and learning at community colleges, Lloyd describes a particular example of
this principle and how it supports faculty communication and student
learning.
Carnegie has created a forum—Carnegie
Conversations—where you can engage publicly with Lloyd and read and respond
to what others have to say about this article at
http://www.carnegiefoundation.org/perspectives/january2007
Or you may respond to the author privately through
CarnegiePresident@carnegiefoundation.org
We look forward to hearing from you.
Sincerely,
Lee S. Shulman
President The Carnegie Foundation for the Advancement of Teaching
International Journal for the Scholarship of Teaching and Learning ---
http://www.georgiasouthern.edu/ijsotl/
Just-In-Time Teaching ---
http://134.68.135.1/jitt/
Just-in-Time Teaching (JiTT for short) is a
teaching and learning strategy based on the interaction between web-based
study assignments and an active learner classroom. Students respond
electronically to carefully constructed web-based assignments which are due
shortly before class, and the instructor reads the student submissions
"just-in-time" to adjust the classroom lesson to suit the students' needs.
Thus, the heart of JiTT is the "feedback loop" formed by the students'
outside-of-class preparation that fundamentally affects what happens during
the subsequent in-class time together.
What is Just-in-Time Teaching designed to
accomplish?
JiTT is aimed at many of the challenges facing
students and instructors in today's classrooms. Student populations are
diversifying. In addition to the traditional nineteen-year-old recent high
school graduates, we now have a kaleidoscope of "non-traditional" students:
older students, working part time students, commuting students, and, at the
service academies, military cadets. They come to our courses with a broad
spectrum of educational backgrounds, interests, perspectives, and
capabilities that compel individualized, tailored instruction. They need
motivation and encouragement to persevere. Consistent, friendly support can
make the difference between a successful experience and a fruitless effort.
It can even mean the difference between graduating and dropping out.
Education research has made us more aware of learning style differences and
of the importance of passing some control of the learning process over to
the students. Active learner environments yield better results but they are
harder to manage than lecture oriented approaches. Three of the
"Seven
Principles for Good Practice in Undergraduate Education" encourage
student-faculty contact, increased time for student study, and cooperative
learning between students.
To confront these challenges, the Just-in-Time Teaching strategy pursues
three major goals:
- 1. To maximize the efficacy of the
classroom session, where human instructors are present.
- 2. To structure the out-of-class time
for maximum learning benefit.
- 3. To create and sustain team spirit.
Students and instructors work as a team toward the same objective, to
help all students pass the course with the maximum amount of retainable
knowledge.
What JiTT is Not
Although Just-in-Time Teaching makes heavy use
of the web, it is not to be confused with either distance learning (DL) or
with computer-aided instruction (CAI). Virtually all JiTT instruction occurs
in a classroom with human instructors. The web materials, added as a
pedagogical resource, act primarily as a communication tool and secondarily
as content provider and organizer. JiTT is also not an attempt to 'process'
large numbers of students by employing computers to do massive grading jobs.
The JiTT Feedback Loop
The Web Component
JiTT web pages fall into three major
categories:
- 1. Student assignments in preparation
for the classroom activity: WarmUps and Puzzles.
- 2. Enrichment pages. Short essays on
practical, everyday applications of the course subject matter, peppered
with URLs to interesting material on the web. These essays have proven
themselves to be an important motivating factor in introductory service
courses, where students often doubt the current relevance of the subject.
- 3. Stand alone instructional material,
such as simulation programs and spreadsheet exercises.
For detailed examples of the JiTT web
resources, please see the
JiTT
resources page.
WarmUps and Puzzles are the heart of the
JiTT web component. These are short, web-based assignments, prompting
the student to think about the upcoming lesson and answer a few
simple questions prior to class. These questions, when fully discussed,
often have complex answers. The students are expected to develop the
answer as far as they can on their own. We finish the job in the
classroom. These assignments are due just a few hours before class time.
The responses are delivered to the instructor electronically to form the
framework for the classroom activities that follow. Typically, the
instructor duplicates sample responses on transparencies and takes them
to class. The interactive classroom session, built around these
responses, replaces the traditional lecture/recitation format.
Students complete the WarmUp assignments before they receive any formal
instruction on a particular topic. They earn credit for answering a
question, substantiated by prior knowledge and whatever they managed to
glean from the textbook. The answers do not have to be complete, or even
correct. In fact, partially correct responses are particularly useful as
classroom discussion fodder. In contrast to WarmUps, Puzzle exercises
are assigned to students after they have received formal instruction on
a particular topic. The Puzzles serve as the framework for a wrap-up
session on a particular topic.
The WarmUps, and to some extent the Puzzles, are undergirded by
education research and target a variety of specific issues. The list of
targeted issues might contain: developing concepts and vocabulary,
modeling -- connecting concepts and equations, estimation -- getting a
feel for magnitudes, relating technical scientific statements to "common
sense", understanding the scope of applicability of equations, etc. The
targeted issues are highly content specific. They may involve the
characteristics of a particular class (e.g. the background skills of a
particular student body).
In preparing WarmUp assignments for an upcoming class meeting, we first
create a conceptual outline of the lesson content. This task is similar
to the preparation of a traditional passive lecture. As we work on the
outline, we pay attention to the pedagogical issues that we need to
focus on when in the classroom. Are we introducing new concepts and/or
new notation? Are we building on a previous lesson, and if so, what
bears repeating? What are the important points we wish the students to
remember from the session? What are the common difficulties typical
students will face when exposed to this material? (Previous classroom
experience and teaching and learning literature can be immensely helpful
here). Once this outline has been created, we create broadly based
questions that will force students to grapple with as many of the issues
as possible. We are hoping to receive, in the student responses, the
framework on which we build the in-class experience.
The Active Learner Classroom
The JiTT classroom session is intimately linked to the electronic
preparatory assignments the students complete outside of class. Exactly
how the classroom time is spent depends on a variety of issues such as
class size, classroom facilities, and student and instructor
personalities. Mini-lectures (10 min max) are often interspersed with
demos, classroom discussion, worksheet exercises, and even hands-on
mini-labs. Regardless, the common key is that the classroom component,
whether interactive lecture or student activities, is informed by an
analysis of various student responses.
In a JiTT classroom students construct the same content as in a passive
lecture with two important added benefits. First, having completed the
web assignment very recently, they enter the classroom ready to actively
engage in the activities. Second, they have a feeling of ownership
since the interactive lesson is based on their own wording and
understanding of the relevant issues.
The give and take in the classroom suggests future WarmUp questions that
will reflect the mood and the level of expertise in the class at hand.
In this way the feedback loop is closed with the students having played
a major part in the endeavor.
From the instructor's point of view, the lesson content remains pretty
much the same from semester to semester with only minor shifts in
emphasis. From the students' perspective, however, the lessons are
always fresh and interesting, with a lot of input from the class.
We designed JiTT to improve student learning in our own classrooms and
have been encouraged by the results, both attitudinal and cognitive. We
attribute this success to three factors that enhance student learning,
identified by Alexander Astin* in his thirty year study of
college student success:
- increased amounts and quality of student-student interaction
- student-faculty interaction
- student study outside of class.
By fostering these, JiTT promotes student learning and satisfaction.
*Astin, Alexander:
What
matters in college? Four critical years revisited (San Francisco,
CA: Jossey-Bass Publishers, 1993).
Bob Jensen's threads on tools and tricks of the trade are at
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm
What works in education?
Perhaps Colleges Should Think About This
"School Ups Grade by Going Online," by Cyrus Farivar, Wired News,
October 12, 2004 --- http://www.wired.com/news/culture/0,1284,65266,00.html?tw=newsletter_topstories_html
Until last year, Walt Whitman Middle School 246 in
Brooklyn was considered a failing school by the state of New York.
But with the help of a program called HIPSchools
that uses rapid communication between parents and teachers through e-mail and
voice mail, M.S.
246 has had a dramatic turnaround. The premise behind "HIP"
comes from Keys Technology Group's mission of "helping involve
parents."
The school has seen distinct improvement in the
performance of its 1,300 students, as well as in regular attendance, which has
risen to 98 percent (an increase of over 10 percent) in the last two years,
according to Georgine Brown-Thompson, academic intervention services
coordinator at M.S. 246.
Continued in the article
RAND Corporation: Measuring Teacher Effectiveness ---
http://www.rand.org/education/projects/measuring-teacher-effectiveness.html
Explore the Measuring Teacher Effectiveness Fact Sheet Series Teachers
Matter: Understanding Teachers' Impact on Student Achievement
Research suggests that, among school-related factors, teachers matter
most when it comes to a student's academic performance. Nonschool
factors do influence student achievement, but effective teaching has the
potential to help level the playing field.
Multiple Choices: Options for Measuring Teaching Effectiveness
Teaching is a complex activity that should be measured with multiple
methods. Some examine teachers' practices directly, while others
emphasize student outcomes. Each method has trade-offs, and no single
method provides a complete picture of a teacher's effectiveness.
Tests and the Teacher: What Student Achievement Tests Do—and Don't—Tell
Us About Teacher Effectiveness
In addition to helping students learn reading and math, we also trust
teachers to teach students to think, reason, and work cooperatively with
one another. Students' scores on achievement tests tell us something—but
by no means everything—about how well teachers are meeting these
expectations.
Value-Added Modeling 101: Using Student Test Scores to Help Measure
Teaching Effectiveness
Value-added models, or VAMs, attempt to measure a teacher's impact on
student achievement apart from other factors, such as individual
ability, family environment, past schooling, and the influence of peers.
Value-added estimates enable relative judgments but are not absolute
indicators of effectiveness.
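The core idea behind a VAM can be shown in a deliberately stripped-down sketch (operational VAMs use richer mixed-effects models with many more controls; the function below is illustrative only): fit a regression of current scores on prior scores across all students, then take a teacher's estimated effect to be the average residual of that teacher's students.

```python
def value_added(prior, current, teacher):
    """Toy value-added estimate: mean per-teacher residual from an OLS fit
    of current scores on prior scores (a simplification of real VAMs)."""
    n = len(prior)
    mp = sum(prior) / n
    mc = sum(current) / n
    # Ordinary least squares fit: current = a + b * prior, pooled over students.
    b = (sum((p - mp) * (c - mc) for p, c in zip(prior, current))
         / sum((p - mp) ** 2 for p in prior))
    a = mc - b * mp
    residuals = [c - (a + b * p) for p, c in zip(prior, current)]
    # A teacher's estimated effect is the mean residual of her students.
    effects = {}
    for t, r in zip(teacher, residuals):
        effects.setdefault(t, []).append(r)
    return {t: sum(rs) / len(rs) for t, rs in effects.items()}
```

Note the fact-sheet's caveat survives even in this toy version: the estimates are relative (residuals average to zero across the pool), so they support comparisons among teachers, not absolute judgments of effectiveness.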
Student Growth Percentiles 101: Using Relative Ranks in Student Test
Scores to Help Measure Teaching Effectiveness
Student growth percentiles, or SGPs, provide a simple way of
comparing the improvement of one teacher's students at the end of the
year with the improvement of other students who started the year at the
same level.
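A minimal sketch of the SGP idea, under simplifying assumptions (real SGP systems use quantile regression over several prior years; here the "peer group" is simply students who started the year at the same prior level, and the function name is hypothetical):

```python
from bisect import bisect_left, bisect_right

def growth_percentile(student_current, peer_currents):
    """Percentile rank of a student's end-of-year score among peers who
    started the year at the same level (mid-rank convention for ties)."""
    peers = sorted(peer_currents)
    lo = bisect_left(peers, student_current)
    hi = bisect_right(peers, student_current)
    # Peers scoring below, plus half of any ties, over the peer-group size.
    return 100.0 * (lo + (hi - lo) / 2) / len(peers)
```

A teacher-level summary is then typically the median of her students' growth percentiles, which is what makes SGPs a simple relative comparison rather than an absolute measure.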
September 2, 2004 message from Carolyn Kotlas [kotlas@email.unc.edu]
"CONSUMER REPORTS" FOR RESEARCH IN
EDUCATION
The What Works Clearinghouse was established in 2002
by the U.S. Department of Education's Institute of Education Sciences with
$18.5 million in funding to "provide educators, policymakers,
researchers, and the public with a central and trusted source of scientific
evidence of what works in education." The Clearinghouse reviews,
according to relevance and validity, the "effectiveness of replicable
educational interventions (programs, products, practices, and policies) that
intend to improve student outcomes." This summer, the Clearinghouse
released two of its planned reports: peer-assisted learning interventions and
middle school math curricula. For more information about the What Works
Clearinghouse and descriptions of all topics to be evaluated, go to http://www.w-w-c.org/
See also:
"'What Works' Research Site Unveiled" by
Debra Viadero EDUCATION WEEK, vol. 23, no. 42, pp. 1, 33, July 14, 2004 http://www.edweek.org/ew/ew_printstory.cfm?slug=42Whatworks.h23
"'What Works' Site Opens Dialogue on
Research" Letter to Editor from Talbot Bielefeldt, Center for Applied
Research in Educational Technology, International Society for Technology in
Education EDUCATION WEEK, vol. 23, no. 44, p. 44, August 11, 2004 http://www.edweek.org/ew/ew_printstory.cfm?slug=44Letter.h23
April 1, 2005 message from Carolyn Kotlas
[kotlas@email.unc.edu]
NEW EDUCAUSE E-BOOK ON THE NET GENERATION
EDUCATING THE NET GENERATION, a new EDUCAUSE
e-book of essays edited by Diana G. Oblinger and James L. Oblinger,
"explores the Net Gen and the implications for institutions in areas such as
teaching, service, learning space design, faculty development, and
curriculum." Essays include: "Technology and Learning Expectations of the
Net Generation;" "Using Technology as a Learning Tool, Not Just the Cool New
Thing;" "Curricula Designed to Meet 21st-Century Expectations;" "Faculty
Development for the Net Generation;" and "Net Generation Students and
Libraries." The entire book is available online at no cost at
http://www.educause.edu/educatingthenetgen/
.
EDUCAUSE is a nonprofit association whose mission
is to advance higher education by promoting the intelligent use of
information technology. For more information, contact: Educause, 4772 Walnut
Street, Suite 206, Boulder, CO 80301-2538 USA; tel: 303-449-4430; fax:
303-440-0461; email:
info@educause.edu; Web:
http://www.educause.edu/
See also:
GROWING UP DIGITAL: THE RISE OF THE NET GENERATION
by Don Tapscott McGraw-Hill, 1999; ISBN: 0-07-063361-4
http://www.growingupdigital.com/
EFFECTIVE E-LEARNING DESIGN
"The unpredictability of the student context and
the mediated relationship with the student require careful attention by the
educational designer to details which might otherwise be managed by the
teacher at the time of instruction." In "Elements of Effective e-Learning
Design" (INTERNATIONAL REVIEW OF RESEARCH IN OPEN AND DISTANCE LEARNING,
March 2005) Andrew R. Brown and Bradley D. Voltz cover six elements of
effective design that can help create effective e-learning delivery. Drawing
upon examples from The Le@rning Federation, an initiative of state and
federal governments of Australia and New Zealand, they discuss lesson
planning, instructional design, creative writing, and software
specification. The paper is available online at
http://www.irrodl.org/content/v6.1/brown_voltz.html
International Review of Research in Open and
Distance Learning (IRRODL) [ISSN 1492-3831] is a free, refereed ejournal
published by Athabasca University - Canada's Open University. For more
information, contact Paula Smith, IRRODL Managing Editor; tel: 780-675-6810;
fax: 780-675-672; email:
irrodl@athabascau.ca
; Web:
http://www.irrodl.org/
The Le@rning Federation (TLF) is an "initiative
designed to create online curriculum materials and the necessary
infrastructure to ensure that teachers and students in Australia and New
Zealand can use these materials to widen and enhance their learning
experiences in the classroom." For more information, see
http://www.thelearningfederation.edu.au/
COMPUTERS IN THE CLASSROOM AND OPEN BOOK EXAMS
In "PCs in the Classroom & Open Book Exams" (UBIQUITY, vol. 6, issue 9,
March 15-22, 2005), Evan Golub asks and supplies some answers to questions
regarding open-book/open-note exams. When classroom computer use is allowed
and encouraged, how can instructors secure the open-book exam environment?
How can cheating be minimized when students are allowed Internet access
during open-book exams? Golub's suggested solutions are available online at
http://www.acm.org/ubiquity/views/v6i9_golub.html
May 5, 2005 message from Carolyn Kotlas
[kotlas@email.unc.edu]
TEACHING, TEACHING TECHNOLOGIES, AND VIEWS OF
KNOWLEDGE
In "Teaching as Performance in the Electronic
Classroom" (FIRST MONDAY, vol. 10, no. 4, April 2005), Doug Brent, professor
in the Faculty of Communication and Culture at the University of Calgary,
presents two views of teaching: teaching as a "performance" and teaching as
a transfer of knowledge through text, a "thing." He discusses the social
groups that have stakes in each view and how teaching will be affected by
the view and group that gains primacy. "If the group that values teaching as
performance has the most influence, we will put more energy into developing
flexible courseware that promotes social engagement and interaction. . . .
If the group that sees teaching as textual [i.e., a thing] has the most
influence, we will develop more elaborate technologies for delivering
courses as online texts, emphasising the role of the student as audience
rather than as participant." Brent's paper is available online at
http://firstmonday.org/issues/issue10_4/brent/index.html .
First Monday [ISSN 1396-0466] is an online,
peer-reviewed journal whose aim is to publish original articles about the
Internet and the global information infrastructure. It is published in
cooperation with the University Library, University of Illinois at Chicago.
For more information, contact: First Monday, c/o Edward Valauskas, Chief
Editor, PO Box 87636, Chicago IL 60680-0636 USA; email: ejv@uic.edu; Web:
http://firstmonday.dk/.
......................................................................
LAPTOPS IN THE CLASSROOM
The theme for the latest issue of NEW DIRECTIONS
FOR TEACHING AND LEARNING (vol. 2005, issue 101, Spring 2005) is "Enhancing
Learning with Laptops in the Classroom." Centered on the faculty development
program at Clemson University, the issue's purpose is "to show that
university instructors can and do make pedagogically productive and novel
use of laptops in the classroom" and "to advise institutional leaders on how
to make a laptop mandate successful at their university." The publication is
available online
http://www3.interscience.wiley.com/cgi-bin/jhome/86011233 .
New Directions for Teaching and Learning [ISSN:
0271-0633], a quarterly journal published by Wiley InterScience, offers a
"comprehensive range of ideas and techniques for improving college teaching
based on the experience of seasoned instructors and on the latest findings
of educational and psychological researchers." The journal is available both
in print and online formats.
......................................................................
NEW E-JOURNAL ON LEARNING AND EVALUATION
STUDIES IN LEARNING, EVALUATION, INNOVATION AND
DEVELOPMENT is a new peer-reviewed electronic journal that "supports
emerging scholars and the development of evidence-based practice and that
publishes research and scholarship about teaching and learning in formal,
semi-formal and informal educational settings and sites." Papers in the
current issue include:
"Can Students Improve Performance by Clicking More?
Engaging Students Through Online Delivery" by Jenny Kofoed
"Managing Learner Interactivity: A Precursor to
Knowledge Exchange" by Ken Purnell, Jim Callan, Greg Whymark and Anna
Gralton
"Online Learning Predicates Teamwork: Collaboration
Underscores Student Engagement" by Greg Whymark, Jim Callan and Ken Purnell
Studies in Learning, Evaluation, Innovation and
Development [ISSN 1832-2050] will be published at least once a year by the
LEID (Learning, Evaluation, Innovation and Development) Centre, Division of
Teaching and Learning Services, Central Queensland University, Rockhampton,
Queensland 4702 Australia. For more information contact: Patrick Danaher,
tel: +61-7-49306417; email: p.danaher@cqu.edu.au. Current and back issues
are available at
http://www.sleid.cqu.edu.au/index.php .
Bob Jensen's threads on education resources are at http://faculty.trinity.edu/rjensen/000aaa/newfaculty.htm#Resources
Bob Jensen's threads on assessment are at http://faculty.trinity.edu/rjensen/assess.htm
Assessment often gets caught in a tug of war between
accountability and improvement.
The Next Great Hope for Measuring Learning ---
http://www.chronicle.com/article/The-Next-Great-Hope-for/238075?cid=at&utm_source=at&utm_medium=en&elqTrackId=49382afe872f46a0b64064c090db9e53&elq=152fd248a4d244b6a1dfcf39b37cbd7c&elqaid=11117&elqat=1&elqCampaignId=4277
Jensen Comment
When it comes to assessment, I tend to think of how I want my brain surgeon to be
assessed before he sticks something hard and sharp into my gray matter. I guess
the accountant in me leans toward accountability.
Mathematics Assessment Project (learning assessment) --- http://map.mathshell.org
September 2, 2004 message from Carolyn Kotlas [kotlas@email.unc.edu]
SURVEY ON QUALITY AND EXTENT OF ONLINE EDUCATION
The Sloan Consortium's 2003 Survey of Online Learning
asked whether students, faculty, and institutions would embrace online
education as a delivery method and whether the quality of online education would
match that of face-to-face instruction. The survey found strong evidence that
students are willing to sign up for online courses and that institutions
consider online courses part of a "critical long-term strategy for their
institution." It is less clear that faculty have embraced online teaching
with the same degree of enthusiasm. The survey's findings are available in
"Sizing the Opportunity: The Quality & Extent of Online Education in
the U.S., 2002 and 2003" by I. Elaine Allen and Jeff Seaman, Sloan Center
for Online Education at Olin and Babson Colleges. The complete report is
online at http://www.sloan-c.org/resources/sizing_opportunity.pdf
The Sloan Consortium (Sloan-C) is a consortium of
institutions and organizations committed "to help learning organizations
continually improve quality, scale, and breadth of their online programs
according to their own distinctive missions, so that education will become a
part of everyday life, accessible and affordable for anyone, anywhere, at any
time, in a wide variety of disciplines." Sloan-C is funded by the Alfred
P. Sloan Foundation. For more information, see http://www.sloan-c.org/
Bob Jensen's threads on the dark side of distance education are at http://faculty.trinity.edu/rjensen/000aaa/theworry.htm
Computer-Based Assessment
Free Book on Assessment in Europe (with particular focus in computer-based
assessment) ---
http://crell.jrc.it/RP/reporttransition.pdf
The Transition to Computer-Based Assessment; New Approaches to Skills
Assessment and Implications for Large-scale Testing
by Friedrich Scheuermann & Julius Björnsson (Eds.)
European Commission Joint Research Centre
Institute for the Protection and Security of the Citizen Contact information
Address: Unit G09, CRELL TP-361, Via Enrico Fermi, 2749; 21027 Ispra (VA), Italy
E-mail: friedrich.scheuermann@jrc.it
Tel.: +39-0332-78.6111
Fax: +39-0332-78.5733
Web http://ipsc.jrc.ec.europa.eu/
http://www.jrc.ec.europa.eu/
Table of Contents ---
http://crell.jrc.it/RP/reporttransition.pdf
Introduction
6
PART I: ASSESSMENT NEEDS AND EUROPEAN APPROACHES
Assessing and Teaching 21st Century Skills Assessment
Call to Action
13
Robert Kozma
The European Coherent Framework of Indicators and Benchmarks and
Implications for Computer-based Assessment 24
Oyvind Bjerkestrand
Computer-based Assessment and the Measurement of Creativity in Education
29
Ernesto Villalba
PART II: GENERAL ISSUES OF COMPUTER-BASED
TESTING
Experiences from Large-Scale Computer-Based
Testing in the USA 39
Brent Bridgeman
National Tests in Denmark – CAT as a Pedagogic Tool 45
Jakob Wandall
Introducing Large-scale Computerized Assessment – Lessons Learned and
Future Challenges 51
Eli Moe
Large-scale Computer-based Testing of Foreign Language Competences across
Europe: Technical Requirements and Implementation 57
Jostein Ryssevik
Delivery Platforms for National and International Computer-based Surveys
63
Sam Haldane
eInclusion, eAccessibility and Design-for-All Issues in the Context of
European Computer-based Assessment 68
Klaus Reich & Christian Petter
Gender differences in cognitive tests: a consequence of gender dependent
preferences for specific information presentation formats? 75
Romain Martin & Marilyn Binkley
PART III: TRANSITION FROM PAPER-AND-PENCIL
TO COMPUTER-BASED TESTING
Risks and Benefits of CBT versus PBT in
High-Stakes Testing 83
Gerben van Lent
Transformational Computer-based Testing 92
Martin Ripley
Reflections on Paper-and-Pencil Tests to eAssessments: Narrow and
Broadband Paths to 21st Century Challenges 99
Katherina Kikis
Transition to Computer-based Assessment:
Motivations and Considerations 104
René Meijer
Transitioning to Computer-Based
Assessments: A Question of Costs 108
Matthieu Farcot & Thibaud Latour
Shifting from Paper-and-Pencil to Computer-based Testing: Requisites,
Challenges and Consequences for Testing Outcomes - A Croatian Perspective
117
Vesna Busko
Comparing Paper-and-Pencil and Online Assessment of Reasoning Skills: A
Pilot Study for Introducing TAO in Large-scale Assessment in Hungary 120
Benő Csapó, Gyöngyvér Molnár & Krisztina R. Tóth
PART IV: METHODOLOGIES OF COMPUTER-BASED
TESTING
Computerized and Adaptive Testing in
Educational Assessment 127
Nathan A. Thompson & David J. Weiss
Computerized Adaptive Testing of Arithmetic at the Entrance of Primary
School Training College (WISCAT-pabo) 134
Theo J.H.M. Eggen & Gerard J.J.M. Straetmans
Issues in Computerized Ability Measurement: Getting out of the Jingle and
Jangle Jungle 145
Oliver Wilhelm
New Constructs, Methods, & Directions for Computer-Based Assessment
151
Patrick C. Kyllonen
Measuring Complex Problem Solving: The MicroDYN Approach 157
Samuel Greiff & Joachim Funke
Testing for Equivalence of Test Data across Media 164
Ulrich Schroeders
PART V: THE PISA 2006 COMPUTER-BASED
ASSESSMENT OF SCIENCE (CBAS)
Utilising the Potential of Computer Delivered
Surveys in Assessing Scientific Literacy
172
Ron Martin
Are Icelandic Boys really better on
Computerized Tests than Conventional ones? Interaction between Gender, Test
Modality and Test Performance 178
Almar M. Halldórsson, Pippa McKelvie & Júlíus K. Björnsson
CBAS in Korea: Experiences, Results and Challenges 194
Mee-Kyeong Lee
How did Danish Students solve the PISA CBAS items? Right and Wrong
Answers from a Gender Perspective 201
Helene Sørensen & Annemarie Møller Andersen
Computer Grading of Essays
Jensen Question
Will students and faculty be totally out of the loop when computers write the
questions, computers write the answers, and computers grade the answers?
"22 Thoughts on Automated Grading of Student Writing," by John Warner,
Inside Higher Ed, April 10, 2013 ---
http://www.insidehighered.com/blogs/just-visiting/22-thoughts-automated-grading-student-writing
EdX, the online learning consortium of Harvard and M.I.T., believes it is
close to a workable model for the automated grading of student writing.
According to Dr. Anant Agarwal, President of EdX,
“This is machine learning and there is a long way to go, but it’s good
enough and the upside is huge. We found that the quality of the grading is
similar to the variation you find from instructor to instructor.”
Since this news was released last week, I’ve been
trying to respond in a coherent, essay-like piece of writing that ties
together various thoughts and ideas into a cohesive and satisfying whole.
I’m giving up on that. This is the Internet, right?
I’ve made a list.
22 Thoughts on the News that Automated
Grading has Arrived
1. I’m willing to stipulate that if not today, very
soon, software will be developed that can assign a numerical grade to
student writing that largely gibes with human assessment. If a computer can
win Jeopardy, it can probably spit out a number for a student essay close to
that of a human grader.
2. On the other hand,
computers cannot
read.
3. No one who teaches writing, or values writing as
part of their courses believes that the numerical grade is the important
part of assessment. Ask anyone who’s taught more than a couple of semesters
of composition and they’ll tell you that they “know” an essay’s grade within
the first 30 seconds of reading. If that’s all I’m supposed to be doing, my
job just got a lot easier.
4. Meaning, quite obviously, what is important
about assessing writing is the response to the student author that allows
them to reflect on their work and improve their approach in subsequent
drafts/future assignments.
5. I don’t know a single instructor of writing who
enjoys grading.
6. At the same time, the only way, and I mean the
only way to develop a relationship with one’s students is to read and
respond to their work. Automated grading is supposed to “free” the
instructor for other tasks, except there is no more important task. Grading
writing, while time consuming and occasionally unpleasant, is simply the
price of doing business.
7. The only motivations for even experimenting
with, let alone embracing, automated grading of student writing are
business-related.
8. Since we know that Watson the computer is better
at Jeopardy than even its all-time greatest champions, why haven’t potential
contestants been “freed” to do other things like watch three competing
software algorithms answer Jeopardy questions asked by an animatronic Alex
Trebek?
9. Is it possible that there are some things we
should leave to people, rather than software? Do we remember that
“efficiency” and “productivity” are not actually human values? If essays
need to be graded, shouldn't we figure out ways for humans to do it?
10. There is maybe (emphasis on maybe) a limited
argument that this kind of software could be used to grade short answer
writing for exams in something like a history or literature course where key
words and concepts are most important in terms of assessing a “good” answer.
11. Except that if the written assessment is such
that it can be graded accurately by software, that’s probably not very good
assessment. If what’s important are the facts and key concepts, won’t
multiple-choice do?
12. The second most misguided statement in the
New York Times article covering the EdX
announcement is this from Anant Agarwal, “There is a huge value in learning
with instant feedback. Students are telling us they learn much better with
instant feedback.” This statement is misguided because instant feedback
immediately followed by additional student attempts is actually antithetical
to everything we know about the writing process. Good writing is almost
always the product of reflection and revision. The feedback must be
processed, and only then can it be implemented. Writing is not a video game.
13. I’m thinking about video games, and how I learn
playing them. For a couple of years, I got very into Rock Band, a music
simulator. I was good, world ranked on multiple instruments if you must ask.
As one moves towards the higher difficulty songs, frustration sets in and
repeated attempts must be made to successfully “play” through one. I
remember trying no fewer than 75 times in a row, one attempt after the
other, to play the drum part for Rush’s
“YYZ,”
and each time, I was booed off the stage by my virtual
fans. My frustration level reached the point where I almost hurled the
entire Rock Band drums apparatus through my (closed) 2nd story window. After
that, fearing for my blood pressure and my sanity, I didn’t play Rock Band
at all for a couple of weeks. When I did, at last, return to the game, I
played “YYZ” through successfully on my first try. Even with video games,
time to process what we’ve learned helps.
14. The most misguided statement in the
Times article is from Daphne Koller, the
founder of Coursera: “It allows students to get immediate feedback on their
work, so that learning turns into a game, with students naturally
gravitating toward resubmitting the work until they get it right.”
15. I’m sorry, that’s not misguided, it’s just
silly.
16. Every semester, I introduce my students to the
diagram for a “rhetorical situation”: an equilateral triangle with “writer,”
“subject,” and “reader,” each at one of the points. With automated grading,
I’ll have to change it to “writer,” “subject,” and “algorithm.”
17. What I’m saying is that writing to the
simulacrum is not the same thing as writing to a flesh and blood human
being. Software graded writing is like having intimate relations with a
RealDoll.
18. How is that not obvious?
19. That MIT and Harvard, two universities of high
esteem, are behind EdX and the automated grading nonsense, should cause
shame among their faculty, at least the ones that profess in the humanities.
20. I’ve wrestled over including that last one. It
seems possibly unfair, but I’m also thinking that it’s time to fight fire
with something as strong as fire, and the only weapon at my personal
disposal is indignation, righteous or otherwise. This is one of the
challenges of writing, thinking of audience and making choices. This choice
may anger some potential natural allies, but if those allies who must have a
front seat to this nonsense aren’t doing anything, they can hardly be
counted as allies.
21. I was encouraged by the reader responses in the
Times article. They run at least 10-1 against the idea of automated
grading of writing, and many of them are well-argued, and even respond to
arguments offered by other commenters. It’s an excellent model for academic
conversation.
22. The purpose of writing is to communicate with
an audience. In good conscience, we cannot ask students to write something
that will not be read. If we cross this threshold, we may as well simply
give up on education. I know that I won’t be involved. Let the software
“talk” to software. Leave me out of it.
Most GMAT critical reasoning questions contain hidden assumptions, and
learning how to recognize them is key
"GMAT Tip: Loaded Questions," Bloomberg Business Week, October
24, 2012 ---
http://www.businessweek.com/articles/2012-10-24/gmat-tip-loaded-questions
The GMAT Tip of the Week is a weekly
column that includes advice on taking the Graduate Management Admission
Test, which is required for admission to most business schools. Every week
an instructor from a top test-prep company will share suggestions for
improving your GMAT score. This week’s tip comes from Andrew Mitchell,
director of prebusiness programs and GMAT instructor at
Kaplan Test Prep.
As the U.S. presidential election continues, the
world around us teems with “arguments.” Arguments conveyed through TV ads,
debates, stump speeches, and newspaper editorials attempt to persuade us to
subscribe to a particular world view, vote for a certain candidate, even
donate money to a specific campaign. That’s what all arguments are: attempts
to convince. In real life, arguments make this attempt using a variety of
tactics, some more honorable than others. While some arguments are based on
solid evidence and reasoning, others rely on appeals to emotion or distorted
facts.
Fortunately for GMAT test takers, the arguments
found in questions that appear in the test’s Critical Reasoning section
follow a specific pattern. Keep these things in mind as you evaluate GMAT
arguments:
• All GMAT arguments contain evidence, which is
used to support a conclusion.
• On the GMAT, all evidence is accepted as true. No
exceptions, no “fact checkers.”
• All GMAT arguments are designed to contain a key
point of vulnerability: a gap between the evidence and the conclusion, which
must be bridged by an assumption.
• An assumption is defined as “something the author
doesn’t state but that must be true in order for the argument to hold.”
Finding the assumption is the key to Critical
Reasoning success. Questions can ask you to identify the central assumption,
point out a flaw in the argument (by showing why the assumption is
unreasonable), or recognize potential facts that would strengthen or weaken
the argument (by supporting or undermining the assumption, respectively).
Practice identifying assumptions as you listen to
the candidates’ arguments. Consider this one: “My administration would
create more jobs, since my policies will cut taxes on corporate profits.”
Continued in article
Jensen Comment
Note that the GMAT was among the first certification examinations to have
computers grade essay questions ---
http://faculty.trinity.edu/rjensen/Assess.htm#ComputerBasedAssessment
Bob Jensen's threads on the CPA and CMA examinations are at
http://faculty.trinity.edu/rjensen/Bookbob1.htm#010303CPAExam
Bob Jensen's threads on careers are at
http://faculty.trinity.edu/rjensen/Bookbob1.htm#careers
Sociology professor designs SAGrader software for grading student essays
Student essays always seem to be riddled with the same
sorts of flaws. So sociology professor Ed Brent decided to hand the work off to
a computer. Students in Brent's Introduction to Sociology course at the
University of Missouri-Columbia now submit drafts through the SAGrader software
he designed. It counts the number of points he wanted his students to include
and analyzes how well concepts are explained. And within seconds, students have
a score. It used to be the students who looked for shortcuts, shopping for
papers online or pilfering parts of an assignment with a simple Google search.
Now, teachers and professors are realizing that they, too, can tap technology
for a facet of academia long reserved for a teacher alone with a red pen.
Software now scores everything from routine assignments in high school English
classes to an essay on the GMAT, the standardized test for business school
admission. (The essay section just added to the Scholastic Aptitude Test for the
college-bound is graded by humans). Though Brent and his two teaching assistants
still handle final papers and grades, students are encouraged to use SAGrader for
a better shot at an "A."
"Computers Now Grading Students' Writing," ABC News, May 8, 2005 ---
http://abcnews.go.com/Technology/wireStory?id=737451
Jensen Comment: Aside from some of the obvious advantages such as grammar
checking, students should have a more difficult time protesting that the grading
is subjective and unfair in terms of the teacher's alleged favored versus
less-favored students. Actually computers have been used for some time in
grading essays, including the GMAT graduate admission test ---
http://www.yaledailynews.com/article.asp?AID=723
References to computer grading of essays ---
http://coeweb.fiu.edu/webassessment/references.htm
You can read about PEG at
http://snipurl.com/PEGgrade
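SAGrader's actual scoring engine is proprietary, but the mechanism the ABC News piece describes (counting whether the expected points appear in a draft) can be sketched as a toy concept-coverage scorer. Everything below is a hypothetical illustration: the function name, the sample concepts, and the scoring rule are invented for this sketch and are not taken from SAGrader.

```python
# Toy concept-coverage scorer: returns the percentage of expected concepts
# that appear in a student's draft. Purely illustrative, not SAGrader's code.

def concept_coverage_score(essay: str, expected_concepts: list[str]) -> float:
    """Return 0-100: the share of expected concepts mentioned in the essay."""
    text = essay.lower()
    found = [c for c in expected_concepts if c.lower() in text]
    return 100.0 * len(found) / len(expected_concepts)

draft = ("Durkheim treated social facts as external constraints, "
         "while Weber emphasized verstehen, or interpretive understanding.")
concepts = ["social facts", "verstehen", "anomie"]
print(round(concept_coverage_score(draft, concepts), 1))  # 2 of 3 concepts found
```

Even this crude substring matching shows why such a system can return a score "within seconds," and also why it cannot tell a well-explained concept from a name-dropped one.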
MEDICAL- AND BUSINESS-SCHOOL ADMISSION TESTS WILL BE GIVEN BY COMPUTER
Applicants to medical and business schools will
soon be able to leave their No. 2 pencils at home. Both the Medical College
Admission Test and the Graduate Management Admission Test are ditching their
paper versions in favor of computer formats. The Association of American Medical
Colleges has signed a contract with Thomson Prometric, part of the Thomson
Corporation, to offer the computer-based version of the MCAT beginning in 2007.
The computerized version is being offered on a trial basis in a few locations
until then. The GMAT, which has been offered both on paper and by computer since
1997, will be offered only by computer starting in January, officials of the
Graduate Management Admission Council said. The test will be developed by ACT
Inc. and delivered by Pearson VUE, a part of Pearson Education Inc. The Law
School Admission Council has no immediate plans to change its test, which will
continue to be given on paper.
The Chronicle of Higher Education, August 5, 2005, Page A13
Jensen Comment: Candidates for the CPA examination are now allowed to take
it only at computer testing centers. The GMAT has been an
optional computer test since 1997. For years the GMAT has used
computerized grading of essay questions and was a pioneer in this regard.
"GMAT will replace an essay with sets of problems requiring different
forms of analysis. Will this fend off competition from the GRE?" by
Scott Jaschik, Inside Higher Ed, June 25, 2010 ---
http://www.insidehighered.com/news/2010/06/25/gmat
Jensen Comment
GMAT testing officials were among the first to adopt computer grading rather
than human grading of essays. I guess that will no longer be the case since the
essay will disappear on the GMAT. However, perhaps the GMAT will still have some
shorter essay questions.
http://faculty.trinity.edu/rjensen/assess.htm#ComputerBasedAssessment
"Facing a Robo-Grader? Just Keep Obfuscating Mellifluously," by
Michael Winerip, The New York Times, April 22, 2012 ---
http://www.nytimes.com/2012/04/23/education/robo-readers-used-to-grade-test-essays.html?pagewanted=all
A recently released study has concluded that
computers are capable of scoring essays on standardized tests as well as
human beings do.
Mark Shermis, dean of the
College of Education at the University of Akron, collected more than 16,000
middle school and high school test essays from six states that had been
graded by humans. He then used automated systems developed by nine companies
to score those essays.
Computer scoring produced
“virtually identical levels of accuracy, with the software in some cases
proving to be more reliable,” according to a University of Akron news
release.
“A Win for the Robo-Readers” is how an Inside
Higher Ed blog post summed things up.
For people with a weakness
for humans, there is more bad news. Graders working as quickly as they can —
the Pearson education company expects readers to spend no more than two to
three minutes per essay — might be capable of scoring 30 writing samples in
an hour.
The automated reader
developed by the
Educational Testing Service, e-Rater, can grade
16,000 essays in 20 seconds, according to David Williamson, a research
director for E.T.S., which develops and administers 50 million tests a year,
including the SAT.
Is this the end? Are Robo-Readers
destined to inherit the earth?
Les Perelman, a director of
writing at the Massachusetts Institute of Technology, says no.
Mr. Perelman enjoys studying
algorithms from E.T.S. research papers when he is not teaching
undergraduates. This has taught him to think like e-Rater.
While his research is
limited, because E.T.S. is the only organization that has permitted him to
test its product, he says the automated reader can be easily gamed, is
vulnerable to test prep, sets a very limited and rigid standard for what
good writing is, and will pressure teachers to dumb down writing
instruction.
The e-Rater’s biggest
problem, he says, is that it can’t identify truth. He tells students not to
waste time worrying about whether their facts are accurate, since pretty
much any fact will do as long as it is incorporated into a well-structured
sentence. “E-Rater doesn’t care if you say the War of 1812 started in 1945,”
he said.
Mr. Perelman found that
e-Rater prefers long essays. A 716-word
essay he wrote that
was padded with more than a dozen nonsensical sentences received a top score
of 6; a well-argued, well-written essay of 567 words was scored a 5.
An automated reader can
count, he said, so it can set parameters for the number of words in a good
sentence and the number of sentences in a good paragraph. “Once you
understand e-Rater’s biases,” he said, “it’s not hard to raise your test
score.”
E-Rater, he said, does not
like short sentences.
Or short paragraphs.
Or sentences that begin with
“or.” And sentences that start with “and.” Nor sentence fragments.
However, he said, e-Rater
likes connectors, like “however,” which serve as programming proxies for
complex thinking. Moreover, “moreover” is good, too.
Continued in article
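Perelman's observations amount to a claim that e-Rater rewards surface features such as length and connector words rather than meaning. A deliberately naive sketch of such a scorer makes the gaming risk concrete; the feature set, cutoffs, and sample essays below are invented for illustration, and e-Rater's real model is proprietary and far more elaborate.

```python
# Deliberately naive surface-feature scorer illustrating the biases Perelman
# describes: length and connector words raise the score regardless of truth.

CONNECTORS = {"however", "moreover", "therefore", "furthermore"}

def surface_feature_score(essay: str) -> int:
    """Score an essay on a 1-6 scale from surface features alone."""
    words = essay.lower().replace(",", " ").replace(".", " ").split()
    score = 1
    if len(words) > 20:   # reward sheer length...
        score += 1
    if len(words) > 40:   # ...and still more length
        score += 1
    score += sum(1 for w in words if w in CONNECTORS)  # reward connectors
    return min(score, 6)  # cap at the top of the 1-6 scale

terse = "The war began in 1812. It ended in 1815. Trade disputes caused it."
padded = ("However, the conflict, moreover, can therefore be understood as "
          "a multifaceted phenomenon; furthermore, " + "scholars agree " * 15)
print(surface_feature_score(terse), surface_feature_score(padded))
```

The padded nonsense outscores the terse, factual essay, which is exactly the pattern Perelman found with his 716-word padded submission: once a scorer's proxies are known, they can be gamed.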
"Professors Cede Grading Power to Outsiders—Even Computers: One
college gives the job to software, while another employs independent 'evaluators',"
by Jeffrey Young, Chronicle of Higher Education, August 7, 2011 ---
http://chronicle.com/article/Professors-Cede-Grading-Power/128528/
The best way to eliminate grade inflation is to
take professors out of the grading process: Replace them with professional
evaluators who never meet the students, and who don't worry that students
will punish harsh grades with poor reviews. That's the argument made by
leaders of Western Governors University, which has hired 300 adjunct
professors who do nothing but grade student work.
"They think like assessors, not professors," says
Diane Johnson, who is in charge of the university's cadre of graders. "The
evaluators have no contact with the students at all. They don't know them.
They don't know what color they are, what they look like, or where they
live. Because of that, there is no temptation to skew results in any way
other than to judge the students' work."
Western Governors is not the only institution
reassessing grading. A few others, including the University of Central
Florida, now outsource the scoring of some essay tests to computers. Their
software can grade essays thanks to improvements in artificial-intelligence
techniques. Software has no emotional biases, either, and one Florida
instructor says machines have proved more fair and balanced in grading than
humans have.
These efforts raise the question: What if
professors aren't that good at grading? What if the model of giving
instructors full control over grades is fundamentally flawed? As more
observers call for evidence of college value in an era of ever-rising
tuition costs, game-changing models like these are getting serious
consideration.
Professors do score poorly when it comes to fair
grading, according to a study published in July in the journal Teachers
College Record. After crunching the numbers on decades' worth of grade
reports from about 135 colleges, the researchers found that average grades
have risen for 30 years, and that A is now the most common grade given at
most colleges. The authors, Stuart Rojstaczer and Christopher Healy, argue
that a "consumer-based approach" to higher education has created subtle
incentives for professors to give higher marks than deserved. "The standard
practice of allowing professors free rein in grading has resulted in grades
that bear little relation to actual performance," the two professors
concluded.
Naturally, the standard grading model has plenty of
defenders, including some who argue that claims of grade inflation are
exaggerated—students could, after all, really be earning those higher
grades. The current system forges a nurturing relationship between
instructor and student and gives individualized attention that no robot or
stranger could give, this argument goes.
But the efforts at Western Governors and Central
Florida could change that relationship, and point to ways to pop any
grade-inflation bubble.
An Army of
Graders
To understand Western Governors' approach, it's
worth a reminder that the entire institution is an experiment that turns the
typical university structure on its head. Western Governors is entirely
online, for one thing. Technically it doesn't offer courses; instead it
provides mentors who help students prepare for a series of high-stakes
homework assignments. Those assignments are designed by a team of
professional test-makers to prove competence in various subject areas.
Continued in article
Jensen Comment
If grading is taken out of the hands of a teacher, one problem is that the
teacher loses a big carrot/stick in motivating students to learn. There's
greater pressure from all sides that forces teachers to teach to the tests and
other aspects of the grading process that are out of her/his control.
For example, I motivated students to prepare for class by giving a quiz every
day. The quizzes motivated students because they were factored rather heavily
into final grades. If my quizzes no longer factored into final grades, my
students would have reduced incentives to prepare for each and every class.
They're more apt to wait until the last minute to cram for the examinations that
are not graded by me.
Be that as it may, I favor competency-based grading in this era of grade
inflation where teachers shivering in fear of student evaluations make courses
easier and easier ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
Bob Jensen's threads on competency-based assessment ---
http://faculty.trinity.edu/rjensen/Assess.htm#ECA
Bob Jensen's threads on assessment in general ---
http://faculty.trinity.edu/rjensen/Assess.htm
Outsourcing the Grading of Papers
At Houston, business majors are now exposed to
Virtual-TA both as freshmen and as upperclassmen.
"Some Papers Are Uploaded to Bangalore to Be Graded," by Audrey Williams
June, Chronicle of Higher Education, April 4, 2010 ---
http://chronicle.com/article/Outsourced-Grading-With/64954/
Lori Whisenant knows that one way to improve the
writing skills of undergraduates is to make them write more. But as each
student in her course in business law and ethics at the University of
Houston began to crank out—often awkwardly—nearly 5,000 words a semester, it
became clear to her that what would really help them was consistent,
detailed feedback.
Her seven teaching assistants, some of whom did not
have much experience, couldn't deliver. Their workload was staggering: About
1,000 juniors and seniors enroll in the course each year. "Our graders were
great," she says, "but they were not experts in providing feedback."
That shortcoming led Ms. Whisenant, director of
business law and ethics studies at Houston, to a novel solution last fall.
She outsourced assignment grading to a company whose employees are mostly in
Asia.
Virtual-TA, a service of a company called EduMetry
Inc., took over. The goal of the service is to relieve professors and
teaching assistants of a traditional and sometimes tiresome task—and even,
the company says, to do it better than TA's can.
The graders working for EduMetry, based in a
Virginia suburb of Washington, are concentrated in India, Singapore, and
Malaysia, along with some in the United States and elsewhere. They do their
work online and communicate with professors via e-mail. The company
advertises that its graders hold advanced degrees and can quickly turn
around assignments with sophisticated commentary, because they are not
juggling their own course work, too.
The company argues that professors freed from
grading papers can spend more time teaching and doing research.
"We tend to drop the ball when it comes to giving
rich feedback, and in the end this hurts the student," says Chandru Rajam,
who has been a business professor at several universities. "I just thought,
'There's got to be a better way.'" He helped found the privately held
EduMetry five years ago and remains on its management staff.
Whether Virtual-TA is that better way remains to be
seen. Company officials would not say how many colleges use the service, but
Mr. Rajam acknowledges that the concept of anonymous and offshore grading is
often difficult for colleges to swallow.
Those that have signed up are a mix of for-profit
and nonprofit institutions, many of them business schools, both in the
United States and overseas. Professors and administrators say they have been
won over by on-the-job performance. "This is what they do for a living,"
says Ms. Whisenant. "We're working with professionals."
Anonymous Expertise
Virtual-TA's tag line is "Your
expert teaching assistants." These graders, also called assessors, have at
least master's degrees, the company says, and must pass a writing test,
since conveying their thoughts on assignments is an integral part of the
job. The company declined to provide The Chronicle with names or degrees of
assessors. Mr. Rajam says that the company's focus is on "the process, not
the individual," and that professors and institutions have ample opportunity
to test the assessors' performance during a trial period, "because the proof
is in the pudding."
Assessors are trained in the use of rubrics, or
systematic guidelines for evaluating student work, and before they are hired
are given sample student assignments to see "how they perform on those,"
says Ravindra Singh Bangari, EduMetry's vice president of assessment
services.
Mr. Bangari, who is based in Bangalore, India,
oversees a group of assessors who work from their homes. He says his job is
to see that the graders, many of them women with children who are eager to
do part-time work, provide results that meet each client's standards and
help students improve.
"Training goes on all the time," says Mr. Bangari,
whose employees work mostly on assignments from business schools. "We are in
constant communication with U.S. faculty."
Such communication, part of a multi-step process,
begins early on. Before the work comes rolling in, the assessors receive the
rubrics that professors provide, along with syllabi and textbooks. In some
instances, the graders will assess a few initial assignments and return them
for the professor's approval.
Sometimes professors want changes in the nature of
the comments. Ms. Whisenant found those on her students' papers initially
"way too formal," she says. "We wanted our feedback to be conversational and
more direct. So we sent them examples of how we wanted it done, and they did
it."
Professors give final grades to assignments, but
the assessors score the papers based on the elements in the rubric and "help
students understand where their strengths and weaknesses are," says Tara
Sherman, vice president of client services at EduMetry. "Then the professors
can give the students the help they need based on the feedback."
Mr. Bangari says that colleges use Virtual-TA's
feedback differently, but that he has seen students' work improve the most
when professors have returned assignments to students and asked them to redo
the work to incorporate the feedback.
The assessors use technology that allows them to
embed comments in each document; professors can review the results (and edit
them if they choose) before passing assignments back to students. In
addition, professors receive a summary of comments from each assignment,
designed to show common "trouble spots" among students' answers, among other
things. The assessors have no contact with students, and the assignments
they grade are stripped of identifying information. Ms. Sherman says most
papers are returned in three or four days, which can be key when it comes to
how students learn. "You can reinforce certain ideas based on timely
feedback," Mr. Rajam says. "Two or three weeks after an assignment is too
long."
No Classroom Insight
Critics of outsourced grading,
however, say the lack of a personal relationship is a problem.
"An outside grader has no insight into how
classroom discussion may have played into what a student wrote in their
paper," says Marilyn Valentino, chair of the board of the Conference on
College Composition and Communication and a veteran professor of English at
Lorain County Community College. "Are they able to say, 'Oh, I understand
where that came from' or 'I understand why they thought that, because Mary
said that in class'?"
Ms. Valentino also questions whether the money
spent on outsourced graders could be better used to help pay for more
classroom instructors.
Professors and on-site teaching assistants, she
says, are better positioned to learn enough about individual students to
adjust their tone to help each one get his or her ideas across on paper.
"Sometimes kidding them works, sometimes being strict and straightforward
works," Ms. Valentino says. "You have to figure out how to get in that
student's mind and motivate them."
Some professors "could be tempted to not even read"
the reports about how students responded to various parts of an assignment,
she says, because when "someone else is taking care of the grading," that
kind of information can become easier to ignore.
Terri Friel, dean of the business school at
Roosevelt University, says such worries are common but overstated. In her
former post as associate dean of administration at Butler University's
business school, she hired EduMetry to help the business school gather
assessment data it needed for accreditation — another service the company
offers. But Ms. Friel believed that Virtual-TA would not appeal to
professors there.
"Faculty have this opinion that grading is their
job, ... but then they'll turn right around and give papers to graduate
teaching assistants," Ms. Friel says. "What's the difference in grading work
online and grading it online from India? India has become known as a very
good place to get a good business education, and why not make use of that
capability?"
Acceptance has been a little easier at West Hills
Community College, in Coalinga, Calif., which turned to Virtual-TA to help
some students in its online classes get more feedback than instructors for
such classes have typically offered. The service is used for one section
each of three online courses—criminal justice, sociology, and basic math.
Instructors can use it for three to five assignments of their choice per
student. Using Virtual-TA for every assignment would be too costly, says
Susan Whitener, associate vice chancellor for educational planning. (The
price varies by length and complexity, but Virtual-TA suggests to potential
clients that each graded assignment will cost $12 per student. That means
outsourcing the grading of six assignments for 20 students in a course would
cost $1,440.)
But West Hills' investment, which it wouldn't
disclose, has paid off in an unexpected way. The feedback from Virtual-TA
seems to make the difference between a student's remaining in an online
course and dropping out.
"We definitely have a cost-benefit ratio that's
completely in our favor for us to do this," Ms. Whitener says.
Holly Suarez, an online instructor of sociology at
West Hills, says retention in her class has improved since she first used
Virtual-TA, two years ago, on weekly writing assignments. Before then, "I
would probably lose half of my students," says Ms. Suarez, who typically
teaches 50 students per class.
Because Virtual-TA provides detailed comments about
grammar, organization, and other writing errors in the papers, students have
a framework for improvement that some instructors may not be able to
provide, she says.
And although Ms. Suarez initially was wary of
Virtual-TA—"I thought I was being replaced"—she can now see its advantages,
she says. "Students are getting expert advice on how to write better, and I
get the chance to really focus on instruction."
Continued in article
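The per-assignment pricing quoted in the article implies a simple cost model. A minimal sketch (the $12 default is the price Virtual-TA suggests to potential clients; the function name and interface are my own illustration):

```python
# Sketch of the per-assignment pricing quoted in the article.
# The $12 default is Virtual-TA's suggested price to potential clients;
# the function name and interface are illustrative, not from the company.
def grading_cost(students, assignments, price_per_assignment=12):
    """Total cost of outsourcing grading for one course."""
    return students * assignments * price_per_assignment

# The article's example: six assignments for 20 students.
print(grading_cost(20, 6))  # 1440
```

This is why West Hills limits the service to a few assignments per student: the cost scales linearly with both enrollment and assignment count.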
Computer Grading of Essays ---
http://faculty.trinity.edu/rjensen/assess.htm#Essays
For years essay questions have been computer graded for the GMAT examination
Sociology professor designs SAGrader software for grading student essays
Assessment in General
Assessment/Learning Issues:
Measurement and the No-Significant-Difference Phenomenon
Formally evaluating new technology in learning with both rigor and
practicality is nearly impossible. The main problem is the constantly changing
technology. By the time assessment research is made available, the underlying
technologies may have been improved to a point where the findings are no longer
relevant under the technologies existing at the time of the research. What
can be done for students after my university installed a campus-wide network is
vastly different from the before-network days. A classroom failure using last
year's technology may not be appropriate to compare with a similar effort using
newer technology. For example, early LCD panel projections from computers in
classrooms were awful in the early 1990s. In the beginning, LCD
panels had no color and had to be used in virtually dark classrooms. This was a
bad experience for most students and instructors (including me). Then new
technology in active matrix LCD panels led to color but the classrooms still had
to be dark. Shortly thereafter, new technologies in overhead projection
brightness allowed for more lighting in classrooms while using LCD panels.
However, many classrooms are not yet equipped with light varying controls to
optimally set lighting levels. Newer trends with even better three-beam
projectors and LCD data projectors changed everything for electronic classrooms,
because now classrooms can have normal lighting as long as lights are not aimed
directly at the screen. The point here is that early experiences with the first
LCD panel technology are no longer relevant in situations where the latest
projection technology, especially in fully equipped electronic classrooms, is
available. Unfortunately, there is a tendency among some faculty to be so
discouraged by one or two failed attempts that they abandon future efforts using
newer technologies.
One of the most creative attempts to evaluate effectiveness from a Total
Quality Management (TQM) perspective is reported by Prabhu and Ramarapu (1994).
This is an attempt to measure learning using a TQM database that can be used to
compare alternative teaching methods or entire programs. [Prabhu, S.S. and
N.K. Ramarapu (1994). “A prototype database to monitor course effectiveness: A
TQM approach,” T H E Technological Horizons in Education, October,
99-103.]
It is easy to become discouraged with first efforts using older technologies.
Many faculty and students became highly frustrated with the early complexities
of using the Internet and/or campus networks that were not user friendly. Unless
they took the time and trouble to become well versed in UNIX programming and
became experienced hackers, the Internet turned into a totally discouraging
nightmare. Now with the WWW and many other user-friendly innovations in campus
and international networking, the need to become an experienced hacker is vastly
reduced.
"Are College Lectures Unfair?" by Annie
Murphy Paul, The New York Times, September 12, 2015 ---
http://www.nytimes.com/2015/09/13/opinion/sunday/are-college-lectures-unfair.html
. . .
Research comparing the two methods has consistently
found that students over all perform better in active-learning courses than
in traditional lecture courses. However, women, minorities, and low-income
and first-generation students benefit more, on average, than white males
from more affluent, educated families.
There are several possible reasons. One is that
poor and minority students are disproportionately likely to have attended
low-performing schools and to have missed out on the rich academic and
extracurricular offerings familiar to their wealthier white classmates, thus
arriving on campus with less background knowledge. This is a problem, since
research has demonstrated that we learn new material by anchoring it to
knowledge we already possess. The same lecture, given by the same professor
in the same lecture hall, is actually not the same for each student
listening; students with more background knowledge will be better able to
absorb and retain what they hear.
Active-learning courses deliberately structure
in-class and out-of-class assignments to ensure that students repeatedly
engage with the material. The instructors may pose questions about the
week’s reading, for example, and require students to answer the questions
online, for a grade, before coming to class. This was the case in an
introductory biology course taught by Kelly A. Hogan at the University of
North Carolina at Chapel Hill. In a study conducted with Sarah L. Eddy of
the University of Washington, the researchers compared this “moderate
structure” course (which included ungraded guided-reading questions and
in-class active-learning exercises in addition to the graded online
assignments) to the same course taught in a “low structure” lecture format.
In the structured course, all demographic groups
reported completing the readings more frequently and spending more time
studying; all groups also achieved higher final grades than did students in
the lecture course. At the same time, the active-learning approach worked
disproportionately well for black students — halving the black-white
achievement gap evident in the lecture course — and for first-generation
college students, closing the gap between them and students from families
with a history of college attendance.
Other active-learning courses administer frequent
quizzes that oblige students to retrieve knowledge from memory rather than
passively read it over in a textbook. Such quizzes have been shown to
improve retention of factual material among all kinds of students.
At the University of Texas at Austin, the
psychology professors James W. Pennebaker and Samuel D. Gosling instituted a
low-stakes quiz at the start of each meeting of their introductory
psychology course. Compared with students who took the same course in a more
traditional format, the quizzed students attended class more often and
achieved higher test scores; the intervention also reduced by 50 percent the
achievement gap between more affluent and less affluent students.
Continued in article
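The "halving the achievement gap" claim is simple arithmetic on group means. A minimal sketch (the scores below are made-up numbers chosen to show the computation, not figures from the Eddy-Hogan study):

```python
# Illustrative means on a 0-100 exam scale; these numbers are invented to
# show the arithmetic of "halving the achievement gap," not taken from
# the Eddy-Hogan study cited in the article.
lecture = {"white": 80.0, "black": 70.0}
structured = {"white": 83.0, "black": 78.0}

def gap(scores):
    # Achievement gap as a simple difference in group means.
    return scores["white"] - scores["black"]

lecture_gap = gap(lecture)        # 10.0 points
structured_gap = gap(structured)  # 5.0 points
reduction = 1 - structured_gap / lecture_gap  # 0.5, i.e. the gap is halved
```

Note that in this sketch both groups improve under the structured format, which matches the article's finding that all demographic groups earned higher grades; the gap narrows because one group improves more.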
Jensen Comment
My initial reaction is that Ms. Paul overgeneralizes the superiority of
active learning over the lecture pedagogy for all kinds of students. In truth,
among good students pedagogy differences are consistently shown not to matter in
terms of student performance --- see below.
This is consistent with the theory that top students adjust to whatever it takes
to be top performers in spite of their gender, racial, and other demographic
differences. Otherwise they would not be considered top students.
Where pedagogy possibly matters is with
introductory students who have not yet proven their abilities and with students
who are not good students. However, here Ms. Paul makes an error in implying
that the lecture pedagogy and the active learning pedagogy are uniquely defined.
There are in fact gray zones at the edges. For example, one of the best teachers
I've ever known used the lecture pedagogy in classes of approximately 35
students. However, she went to extremes in engaging her students in learning
experiences in her office, where she was available around 40 hours per week to
help students individually and in small groups. In her case, there was no need
to flip a classroom to improve the performance of virtually any one of her students.
Alice Nichols is now retired but she could draw more out of her intermediate
accounting students than any teacher I've ever known.
Ms. Paul does not define active
learning, and I would argue that there are too many types of active learning
that undermine her overly generalized conclusions. For example, team learning is
often an active pedagogy where it is quite common to reward free riders on a
team with high participation scores or to penalize a good team learner
surrounded by teammates of questionable value. There are of course other types
of active learning such as case method teaching and science laboratories. In
both instances these types of active learning approaches tend to work better for
highly motivated students. These are usually not so great for poor students,
especially poorly motivated students. This is why case method teaching generally
works better in graduate courses than in first-year undergraduate courses.
In any case, I think Ms. Paul has
extrapolated far too much from her questionable evidence.
"Are Your Students Learning From Their Mistakes?" by David Gooblar,
Chronicle of Higher Education, June 1, 2016 ---
https://chroniclevitae.com/news/1421-are-your-students-learning-from-their-mistakes?cid=at&utm_source=at&utm_medium=en&elqTrackId=d5b397c2094347e1b0e824611a75a491&elq=1158b22a0ab54272a738491e2c6538ab&elqaid=9288&elqat=1&elqCampaignId=3251
Jensen Comment
If instructors are not giving mostly A grades
in a course, large-scale empirical studies show that students adapt to
what counts most for grades. For example, most of them will dig in their
heels and do whatever it takes at critical points in the grading
process. This is widely known as the "no-significant-difference"
phenomenon.
http://faculty.trinity.edu/rjensen/assess.htm#AssessmentIssues
When given second chances, a common strategy is
to wing it the first time and check the grade. If the grade is low,
students dig in like they should have the first time. One huge problem
with second chances is that this policy contributes to the biggest
scandal in education in recent years --- grade inflation, where the
median grade across North America tends to be A-.
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
Of course the main cause of grade inflation is
having student evaluations of teaching affect performance evaluations and
tenure. Second-chance teachers most likely get higher teaching evaluations.
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
If second-chance teachers are giving mostly A
grades, something is wrong with academic standards.
"Study: Little Difference in Learning in Online and In-Class Science
Courses," Inside Higher Ed, October 22, 2012 ---
http://www.insidehighered.com/quicktakes/2012/10/22/study-little-difference-learning-online-and-class-science-courses
A study in Colorado has found little difference in
the learning of students in online or in-person introductory science
courses. The study tracked community college students who took science
courses online and in traditional classes, and who then went on to four-year
universities in the state. Upon transferring, the students in the two groups
performed equally well. Some science faculty members have expressed
skepticism about the ability of online students in science, due to the lack
of group laboratory opportunities, but the programs in Colorado work with
companies to provide home kits so that online students can have a lab
experience.
Jensen Comment
Firstly, note that online courses are not necessarily mass education (MOOC)
styled courses. The student-student and student-faculty interactions can be
greater online than onsite. For example, my daughter's introductory chemistry
class at the University of Texas had over 600 students. On the date of the final
examination the instructor had never met her and had zero control over her final
grade. On the other hand, her microbiology instructor in a graduate course at
the University of Maine became her husband over 20 years ago.
Another factor is networking. For example, Harvard Business School students
meeting face-to-face in courses bond in life-long networks that may be stronger
than for students who've never established networks via classes, dining halls,
volleyball games, softball games, rowing on the Charles River, etc. There's
more to learning than is typically tested in competency examinations.
My point is that there are many externalities to both onsite and online
learning. And concluding that there's "little difference in learning" depends
upon what you mean by learning. The SCALE experiments at the University of
Illinois found that online students having the same instructor tended to do
slightly better than onsite students. This is partly because there are fewer logistical
time wasters in online learning. The effect becomes larger for off-campus
students where commuting time (as in Mexico City) can take hours going to and
from campus.
http://faculty.trinity.edu/rjensen/255wp.htm
"When Coaching and Testing Collide," by Lee S. Shulman, Carnegie
Perspectives," May 2008 ---
http://www.carnegiefoundation.org/perspectives/sub.asp?key=245&subkey=2598
It's a scene we have watched dozens of times in the
movies. A young man or woman of modest talent tries out for the baseball or
football or basketball team under the tutelage of a gruff, demanding coach
who expresses initial doubts about the likelihood that the kid will prove
himself or herself worthy of a spot on the team. The coach is tough and
persistent, setting high standards and then mercilessly driving all his
charges to meet them. In the climactic scene at the season's end, the good
guys or gals are losing by several baskets, or runs, or a
touchdown—depending on the sport. "Send me in, coach," pleads our young
hero/ine, which the coach reluctantly does. The kid scores the winning points,
and the team wins. The coach turns out to have a heart of gold, and the
reasons for his seeming cruelty become apparent.
What exactly is it that the coach provides the
aspirant? Let me propose five processes associated with both the coach and
mentor roles: 1) technique, learned through endless drill; 2) strategy, that
allows the person who is coached to become capable of a conception of the
work that will turn out to be pivotal in their eventual victory; 3)
motivation, which produces a "Rocky-like" level of commitment that will help
them exceed their own and others' expectations; 4) vision, where players
come together in a new vision of the process and their capabilities for
success; and 5) identity, whereby the protagonist not only wins, but is
transformed, with an internalized new sense of self.
In sports there is always a clear line between the
coaching situation and the performance context. When the final jump shot is
made from the three-point line by the basketball player, the coach can't
jump onto the court and give the ball the extra momentum or spin it might
need. I prefer to call such typical relationships between a coach/mentor and
player/protégé examples of unmediated mentoring. No separate product comes
in the middle between the coaching and the performing that renders the
relative contributions of the coach and the coached inherently ambiguous
because the entire performance is visible and is itself the basis for
evaluating success or failure.
There is, however, an entire genre of mediated
mentoring. The performance is not directly observed and has yielded a
product which is the focal point of competition and evaluation. Thus in the
case of mediated performances, the respective roles of coach and performer
are inherently invisible. Although the five processes are in place and just
as transformative, there is inherently no way to discern how much of the
work was done independently by the candidate, by peers or by advisors.
Whenever mentoring is mediated by a product whose
actual authoring processes are not directly observable, as is the case with
literature, objects of architectural or mechanical design, scholarly
publications, doctoral dissertations, and even paintings, assessment of
individual competence is problematic. But are these problems of educational
measurement or a new set of realities regarding the conditions of expert
performance? Stanford education professor Sam Wineburg and others point out
that the crux of the problem may not be measurement error but rather the
inherently social and interactive character of the performances whose
competence is assessed. Writing is and should be critiqued and edited, as
should painting, the designs for buildings and the research performed in
scientific laboratories. To avoid mentoring merely to ensure the legitimacy
of individual test scores might even be judged a form of malpractice! So we
are faced with an essential tension between the inherently social character
of most forms of complex human performance and the psychometric imperative
to estimate a "true score" for ability or any other personal trait using the
individual as the unit of analysis.
In an education setting, the distinction between
the scores that a student earns on any test-like event—multiple choice test,
essay exam, portfolio or senior sermon in a seminary—and their underlying
"true" capability is a reflection of the distinction, borrowed perhaps from
the field of linguistics, between competence and performance. Psychometrics
rests on the claim that the observed performance is a valid indicator if it
tracks the underlying competence faithfully. But what if mentored or coached
performances actually track underlying competence more validly than
measurement of students working alone? What if the composition written by a
student in the presence of his editing team is a better indicator of his
future writing competence than having him write alone?
That is what sits at the heart of the puzzle.
My proposal for "getting over" this essential
tension is three-fold: making changes in the processes of assessment, making
explicit the parameters of mentoring, and developing a clear code of ethical
principles for both assessment and mentoring. At the heart of these
proposals is the principle of transparency. Everything possible must be done
to ensure that the roles of mentors, peers and students be transparently
clear in any mediated mentoring activity. There should be ways of reporting
on the character of coaching for test performance that make the efforts of
the coach entirely transparent to assessment.
I have often written that collaboration is a
marriage of insufficiencies; that students can work together in ways that
scaffold and support each other's learning, and in ways that support each
other's knowledge. Now I call for a marriage of sufficiencies to overcome
the essential tensions between individual work and collaborative
performance, coaching support and independent assessment, the mentor as an
agent of zealous advocacy and the mentor as a steward of the commons.
As Dewey observed, we will not solve this problem,
we will get over it. It is built into the psychometric paradox: Our
measurement models are psychometric but our assessment needs are often
sociometric, requiring the measurement of socially scaffolded and joint
productions.
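Shulman's "true score" language comes from classical test theory, where an observed score is modeled as a true score plus random error, and a test's reliability is the share of observed-score variance attributable to true scores. A minimal simulation sketch (all distributions and numbers below are illustrative assumptions, not from Shulman's essay):

```python
import random

# Classical test theory sketch: observed score = true score + random error.
# Reliability = var(true) / var(observed); all numbers are illustrative.
random.seed(0)
n = 10_000
true_scores = [random.gauss(70, 10) for _ in range(n)]    # sd_true = 10
observed = [t + random.gauss(0, 5) for t in true_scores]  # sd_error = 5

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Theory predicts reliability = 10**2 / (10**2 + 5**2) = 0.8.
reliability = variance(true_scores) / variance(observed)
```

The psychometric imperative Shulman describes assumes the error term is the individual's alone; mediated mentoring muddies the model because part of what looks like "true score" may be the coach's contribution.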
Carnegie Perspectives ---
http://www.carnegiefoundation.org/perspectives/
From The Wall Street
Journal Accounting Weekly Review on November 17, 2006
TITLE: Colleges, Accreditors Seek Better Ways to Measure Learning
REPORTER: Daniel Golden
DATE: Nov 13, 2006 PAGE: B1
LINK:
http://online.wsj.com/article/SB116338508743121260.html?mod=djem_jiewr_ac
TOPICS: Accounting
SUMMARY: The article discusses college- or university-wide
accreditation by regional accreditation bodies and reaction to the
Spellings Commission report. Questions extend the accreditation
discussion to AACSB accreditation.
QUESTIONS:
1.) What is accreditation? The article describes university-wide
accreditation by regional accrediting bodies. Why is this step
necessary?
2.) Does your business school have accreditation by Association to
Advance Collegiate Schools of Business (AACSB)? How does this
accreditation differ from university-wide accreditation?
3.) Why are regional accrediting agencies planning to meet with
Secretary Spellings?
4.) Did you consider accreditation in deciding where to go to college
or university? Why or why not?
5.) Do you think improvements in assessing student learning are
important, as the Spellings Commission argues and accreditors are now
touting? Support your answer.
SMALL GROUP ASSIGNMENT: Find out about your college or university's
accreditation. When was the last accreditation review? Were there any
concerns expressed by the accreditors? How has the university responded
to any concerns expressed?
Once these data are gathered, discuss in class in groups:
Has this information been easy or difficult to find? Do you agree
with the assessment of concerns about the institution and/or the
university's responses?
Reviewed By: Judy Beckman, University of Rhode Island
At the University of the South, a
highly regarded liberal-arts college in Sewanee, Tenn., the dozen
professors who teach the required freshman Shakespeare course design
their classes differently, assigning their favorite plays and
writing and grading their own exams.
But starting next fall, one question
on the final exam will be the same across all of the classes, and
instructors won't grade their own students' answers to that
question. Instead, to assure more objective evaluation, the
professors will trade exams and grade each other's students.
The English department adopted this
change -- despite faculty grumbling about losing some classroom
independence -- under pressure from the Southern Association of
Colleges and Schools. The association, one of the six regional
groups that accredit nearly 3,000 U.S. colleges, told the University
of the South that, to have its accreditation renewed, it would have
to do a better job of measuring student learning. Without such
accreditation, the school's students wouldn't qualify for federal
financial aid.
The shift "does cut into the
individual faculty member's autonomy, and that's disturbing," says
Jennifer Michael, an associate professor. "On the other hand, it's
making us think about how do we figure out what students are
actually learning. Maybe having them take and pass a course doesn't
mean they've learned everything we think they have."
Regional accreditors used to limit
their examinations to colleges' financial solvency and educational
resources, with the result that well-established schools enjoyed
rubber-stamp approval. But now they are increasingly holding
colleges, prestigious or not, responsible for undergraduates' grasp
of such skills as writing and critical thinking. And prodded by
regional accreditors, colleges are adopting various means of
assessing learning in addition to classroom grades, from electronic
portfolios that collect a student's work from different courses to
standardized testing and special projects for graduating seniors.
The accreditors aren't moving fast
enough for the Bush administration, though. In the wake of a
federally sponsored study published in 2005 that showed declining
literacy among college-educated Americans, Secretary of Education
Margaret Spellings and a commission she appointed on the future of
higher education want colleges to be more accountable for -- and
candid about -- student performance, and they have criticized
accreditors as barriers to reform.
Congress sets the standards for
accreditors, and the Education Department periodically reviews
compliance with those standards. Congress identified "success with
respect to student achievement" as a requirement for accreditation
in 1992, and then in 1998 made it the top priority. That imperative,
along with the advent of online education, has spurred accreditors
to rethink their longtime emphasis on such criteria as the number of
faculty members with doctorates. Since 2000, several regional
accreditors have revamped their rules to emphasize student learning.
"Accreditors have moved the ball
forward," says Kati Haycock, a member of the Spellings commission
and the director of the nonprofit Education Trust in Washington,
D.C., which seeks better schooling for disadvantaged students. "Not
far enough, not fast enough, but they have moved the ball forward."
An issue paper written for the
commission by Robert Dickeson, a former president of the University
of Northern Colorado, complained that accreditation "currently
settles for meeting minimum standards," and it called for replacing
regional accreditors with a new national foundation. "Technology has
rendered the quaint jurisdictional approach to accreditation
obsolete," Mr. Dickeson wrote.
The commission didn't endorse that
recommendation, but its final report last month cited "significant
shortcomings" in accreditation and called for "transformation" of
the process. In a Sept. 22 speech marking the release of the report,
Secretary Spellings said that accreditors are "largely focused on
inputs, more on how many books are in a college library than whether
students can actually understand them....That must change."
David Ward, a commission member and
the president of the American Council on Education, a higher
education advocacy group, declined to sign the report, in part
because he objected to its criticism of accreditors as overly
simplistic.
Russell Edgerton, president emeritus
of the American Association for Higher Education, says "there's no
question that American colleges are underachieving," but he argues
that accreditors are rising to the challenge. "Ten years ago, I
would have said that regional accreditors are dead in the water and
asleep at the wheel," he says. But "there's been a kind of
renaissance within accreditation agencies in the past five to six
years. They're helping institutions create a culture of evidence
about student learning."
Mr. Edgerton also thinks the federal
government's emphasis on new accountability measures is flawed
because it bypasses the judgment of traditional arbiters like
faculty and accreditors. "The danger is that the standardized
testing approach in K-12 would slop over into higher education," he
says. "Higher ed is different."
Jerome Walker, associate provost and
accreditation liaison officer for the University of Southern
California, agrees that the administration's attacks on accreditors
are unfair. The Western Association of Schools and Colleges, which
accredits USC, "has been extremely sensitive" to student learning,
he says.
According to the Western Association's
executive director, Ralph Wolff, the group revamped its standards in
2001 to require colleges to identify preparation needed by entering
freshmen and the expectations for student progress in critical
thinking, quantitative reasoning and other skills. Its accreditation
process now takes four years, up from 1½, and it features a
detailed, peer-reviewed proposal for improvement and two site
visits, including one devoted to "educational effectiveness."
Historically, research universities
like USC "used to blow off" accreditation, Mr. Wolff says. "Now this
has become a real challenge for them in a good way."
Encouraged by Mr. Wolff, USC last year
assigned the same two essay questions -- one about conformity,
another based on a quotation from ethicist Robert Bellah -- to
freshmen in a beginning writing course and juniors and seniors in an
advanced course. A group of faculty then evaluated the essays
without knowing the students' names or which course they were
taking. The reassuring outcome, according to Richard Fliegel,
assistant dean for academic programs, was that juniors and seniors
"demonstrated significantly more critical thinking skills" than
freshmen, and that advanced students who had taken the first-year
course outperformed transfer students who hadn't taken beginning
writing at USC.
Because the writing initiative is
tailored to USC's curriculum, the results -- while helpful to
administrators and accreditors -- wouldn't necessarily help the
public compare USC to other schools. That is a big drawback as far
as the Bush administration is concerned. "I have two kids in college
now," says Vickie Schray, deputy director of the Spellings
commission. "It's a huge expense. Yet there's very little
information on return of investment or ability to shop around for
the greatest value."
She adds, though, that it is a
"misconception" to think that the administration wants to have "one
standardized test for all institutions" or to extend the testing
requirements of the "No Child Left Behind" law for K-12 schools to
higher education.
Even so, one standardized test of
critical thinking, the Collegiate Learning Assessment, is becoming
popular. It adjusts for students' scores on the SAT and ACT
college-entrance exams, potentially allowing more meaningful
comparisons of the value added by colleges. The number of schools
using the assessment has soared from 54 two years ago to 170 this
year. Among those using the test this fall: the University of Texas
at Austin, Duke University, Arizona State University and Washington
and Lee University.
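The article says only that the Collegiate Learning Assessment "adjusts for" SAT and ACT scores; one common way to make such an adjustment (a minimal sketch, not the CLA's actual methodology, with entirely made-up numbers) is to regress institution-level test means on entrance-exam means and treat each school's residual as an estimate of value added:

```python
# Sketch of a value-added adjustment: regress hypothetical school-level
# assessment means on SAT means, then take each school's residual
# (actual minus predicted score) as its "value added" estimate.

def least_squares(xs, ys):
    """Ordinary least-squares fit y = a + b*x for paired samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

def value_added(sat_means, test_means):
    """Residual assessment score for each school after adjusting for intake."""
    a, b = least_squares(sat_means, test_means)
    return [score - (a + b * sat) for sat, score in zip(sat_means, test_means)]

# Hypothetical data: schools with stronger intakes tend to score higher,
# but the third school outperforms what its intake alone would predict.
sat = [1000, 1100, 1200, 1300]
assessment = [1040, 1130, 1260, 1330]
residuals = value_added(sat, assessment)
```

Under this scheme a school with a positive residual appears to add more than its intake predicts, which is why such adjusted comparisons are seen as more meaningful than raw score rankings.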
Roger Benjamin, president of the
nonprofit Council for Aid to Education, which sponsors the test,
says state officials and university administrators have been the
principal forces behind its increasing use. "Accreditors are coming
to the party, but a bit late," Mr. Benjamin says.
Meanwhile, Secretary Spellings plans
to meet with accreditors in late November to discuss how to
"accelerate the focus on student achievement," Ms. Schray says.
Accreditors say they welcome the opportunity to tout their progress.
"We have made a lot of reforms," says the Western Association's Mr.
Wolff. "We'd like to bring the secretary up-to-date on the
significance of these reforms and the impact they're already having
on institutions."
As David Bartholomae observes, “We make a huge
mistake if we don’t try to articulate more publicly what it is we value in
intellectual work. We do this routinely for our students — so it should not be
difficult to find the language we need to speak to parents and legislators.” If
we do not try to find that public language but argue instead that we are not
accountable to those parents and legislators, we will only confirm what our
cynical detractors say about us, that our real aim is to keep the secrets of our
intellectual club to ourselves. By asking us to spell out those secrets and
measuring our success in opening them to all, outcomes assessment helps make
democratic education a reality.
Gerald Graff, "Assessment Changes
Everything," Inside Higher Ed, February 21, 2008 ---
http://www.insidehighered.com/views/2008/02/21/graff
Gerald Graff is professor of English at the University of Illinois at Chicago
and president of the Modern Language Association. This essay is adapted from a
paper he delivered in December at the MLA annual meeting, a version of which
appears on the MLA’s Web site and is reproduced here with the association’s
permission. Among Graff’s books are Professing Literature, Beyond the
Culture Wars and Clueless in Academe: How School Obscures the Life of the Mind.
The consensus report, which was approved by the
group’s international board of directors, asserts that it is vital when
accrediting institutions to assess the “impact” of faculty members’ research on
actual practices in the business world.
"Measuring ‘Impact’ of B-School Research," by Andy Guess, Inside
Higher Ed, February 21, 2008 ---
http://www.insidehighered.com/news/2008/02/22/impact
Ask anyone with an M.B.A.: Business school provides
an ideal environment to network, learn management principles and gain access
to jobs. Professors there use a mix of scholarly expertise and business
experience to teach theory and practice, while students prepare for the life
of industry: A simple formula that serves the school, the students and the
corporations that recruit them.
Yet like
any other academic enterprise, business schools expect their
faculty to produce peer-reviewed research. The relevance,
purpose and merit of that research have been debated almost
since the institutions started appearing, and now a new
report promises to add to the discussion — and possibly stir
more debate. The Association to Advance Collegiate Schools
of Business on Thursday released the final report of its
Impact of Research Task Force, the
result of feedback from almost 1,000 deans, directors and
professors to a preliminary draft circulated in August.
The consensus
report, which was approved by the group’s international
board of directors, asserts that it is vital when
accrediting institutions to assess the “impact” of faculty
members’ research on actual practices in the business world.
But it does not settle on concrete metrics for impact,
leaving that discussion to a future implementation task
force, and emphasizes that a “one size fits all” approach
will not work in measuring the value of scholars’ work.
The report
does offer suggestions for potential measures of impact. For
a researcher studying how to improve manufacturing
practices, impact could be measured by counting the number
of firms adopting the new approach. For a professor who
writes a book about finance for a popular audience, one
measure could be the number of copies sold or the quality of
reviews in newspapers and magazines.
“In the
past, there was a tendency I think to look at the
[traditional academic] model as kind of the desired
situation for all business schools, and what we’re saying
here in this report is that there is not a one-size-fits-all
model in this business; you should have impact and
expectations dependent on the mission of the business school
and the university,” said Richard Cosier, the dean of the
Krannert School of Management at Purdue University and vice
chair and chair-elect of AACSB’s board. “It’s a pretty
radical position, if you know this business we’re in.”
That
position worried some respondents to the initial draft, who
feared an undue emphasis on immediate, visible impact of
research on business practices — essentially, clear
utilitarian value — over basic research. The final report
takes pains to alleviate those concerns, reassuring deans
and scholars that it wasn’t minimizing the contributions of
theoretical work or requiring that all professors at a
particular school demonstrate “impact” for the institution
to be accredited.
“Many
readers, for instance, inferred that the Task Force believes
that ALL intellectual contributions must be relevant to and
impact practice to be valued. The position of the Task Force
is that intellectual contributions in the form of basic
theoretical research can and have been extremely valuable
even if not intended to directly impact practice,” the
report states.
“It also is
important to clarify that the recommendations would not
require every faculty member to demonstrate impact from
research in order to be academically qualified for AACSB
accreditation review. While Recommendation #1 suggests that
AACSB examine a school’s portfolio of intellectual
contributions based on impact measures, it does not specify
minimum requirements for the maintenance of individual
academic qualification. In fact, the Task Force reminds us
that to demonstrate faculty currency, the current standards
allow for a breadth of other scholarly activities, many of
which may not result in intellectual contributions.”
Cosier, who
was on the task force that produced the report, noted that
business schools with different missions might require
differing definitions of impact. For example, a traditional
Ph.D.-granting institution would focus on peer-reviewed
research in academic journals that explores theoretical
questions and management concepts. An undergraduate
institution more geared toward classroom teaching, on the
other hand, might be better served by a definition of impact
that evaluated research on pedagogical concerns and learning
methods, he suggested.
A further
concern, he added, is that there simply aren’t enough
Ph.D.-trained junior faculty coming down the pipeline, let
alone resources to support them, to justify a single
research-oriented model across the board. “Theoretically,
I’d say there’s probably not a limit” to the amount of
academic business research that could be produced, “but
practically there is a limit,” Cosier said.
But
some critics have worried that the
report could encourage a focus on the immediate impact of
research at the expense of theoretical work that could
potentially have an unexpected payoff in the future.
Historically, as the report notes, business scholarship was
viewed as inferior to that in other fields, but it has
gained esteem among colleagues over the past 50 or so years.
In that context, the AACSB has pursued a concerted effort to
define and promote the role of research in business schools.
The report’s concrete recommendations also include an awards
program for “high-impact” research and the promotion of
links between faculty members and managers who put some of
their research to use in practice.
The
recommendations still have a ways to go before they become
policy, however. An implementation task force is planned to
look at how to turn the report into a set of workable
policies, with some especially worried about how the
“impact” measures would be codified. The idea, Cosier said,
was to pilot some of the ideas in limited contexts before
rolling them out on a wider basis.
Jensen Comment
It will almost be a joke to watch leading accountics researchers trying to show
how their esoteric findings have impacted the practice world when the professors
themselves cannot point to any independent replications of their own work ---
http://faculty.trinity.edu/rjensen/Theory01.htm#Replication
Is the practice world so naive as to rely upon findings of scientific research
that has not been replicated?
Those conclusions come
from
a national survey of employers
with at least 25 employees and significant
hiring of recent college graduates, released
Tuesday by the Association of American
Colleges and Universities. Over all, 65
percent of those surveyed believe that new
graduates of four-year colleges have most or
all of the skills to succeed in entry-level
positions, but only 40 percent believe that
they have the skills to advance.
. . .
In
terms of specific skills, the employers didn’t give many A’s
or fail many either. The employers were asked to rank new
graduates on 12 key areas, and the grads did best in
teamwork, ethical judgments and intercultural work, and
worst in global knowledge, self-direction and writing.
Employers' Ratings of College Graduates' Preparedness on a 1-10 Scale

Category                 Mean Rating   % giving high (8-10) rating   % giving low (1-5) rating
Teamwork                 7.0           39%                           17%
Ethical judgment         6.9           38%                           19%
Intercultural skills     6.9           38%                           19%
Social responsibility    6.7           35%                           21%
Quantitative reasoning   6.7           32%                           23%
Oral communication       6.6           30%                           23%
Self-knowledge           6.5           28%                           26%
Adaptability             6.3           24%                           30%
Critical thinking        6.3           22%                           31%
Writing                  6.1           26%                           37%
Self-direction           5.9           23%                           42%
Global knowledge         5.7           18%                           46%
To
the extent that employers give graduates mixed grades, that
raises the question of how they determine who is really
prepared. Many of the existing tools appear to be
insufficient, the poll found.
Continued in
article
Jensen Comment
This study is misleading in the sense that large employers generally hire
above-average graduates. This skews the results upward with respect to the
entire population of college graduates. Colleges have a long way to go in modern
times.
Bob Jensen's threads on higher education controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
January 6, 2006 message from Carolyn Kotlas
[kotlas@email.unc.edu]
No Significant Difference Phenomenon website
http://www.nosignificantdifference.org/
The website is a companion piece to Thomas L.
Russell's book THE NO SIGNIFICANT DIFFERENCE PHENOMENON, a bibliography of
355 research reports, summaries, and papers that document no significant
differences in student outcomes between alternate modes of education
delivery.
International Society for Technology in Education --- http://www.iste.org/
ISTE
is a nonprofit professional organization with a worldwide membership
of leaders and potential leaders in educational technology. We are
dedicated to providing leadership and service to improve teaching and
learning by advancing the effective use of technology in K–12
education and teacher education. We provide our members with
information, networking opportunities, and guidance as they face the
challenge of incorporating computers, the Internet, and other new
technologies into their schools.
Home
of the National Educational Technology Standards (NETS), the Center
for Applied Research in Education Technology (CARET), and the National
Educational Computing Conference (NECC), ISTE meets its mission
through knowledge generation, professional development, and advocacy.
ISTE also represents and informs its membership regarding educational
issues of national scope through ISTE–DC. We support a worldwide
network of Affiliates and Special Interest Groups (SIGs), and we offer
our members the latest information through our periodicals and
journals.
An
organization of great diversity, ISTE leads through presenting innovative
educational technology books and programs; conducting professional development
workshops, forums, and symposia; and researching, evaluating, and
disseminating findings regarding educational technology on an international
level. ISTE’s Web site, www.iste.org,
contains coverage of many topics relevant to the educational technology
community.
"Surveying the Digital Landscape: Evolving Technologies 2004," Educause
Review, vol. 39, no. 6 (November/December 2004): 78–92. --- http://www.educause.edu/apps/er/erm04/erm0464.asp
Each year, the members of the EDUCAUSE Evolving
Technologies Committee identify and research the evolving technologies that
are having the most direct impact on higher education institutions. The
committee members choose the relevant topics, write white papers, and present
their findings at the EDUCAUSE annual conference.
December 9, 2004 message from Ed Scribner [escribne@nmsu.edu]
Bob,
Thanks for that EDUCAUSE link. Who among us old-timers
from the mainframe BITNET days would have predicted that “spam management”
would top the list of influential campus technologies in 2004?
While following the link you sent, I noticed that
Wesleyan has a nicely crafted set of assessment links that you probably
already have, but it didn’t turn up in my search of trinity.edu:
Information Technology Services Assessment --- http://www.wesleyan.edu/its/acs/assessment.htt
Ed Scribner
New Mexico State
Bob Jensen discusses the long term future of education technologies at http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#Future
From T.H.E. Journal, April 2004 --- http://www.thejournal.com/magazine/vault/M2664.cfm
"High (School)-Tech: The Effect of Technology on Student Achievement in
Grades 7-12," by Neal Starkman, T.H.E.'s The Focus Newsletter, April
15, 2004 --- http://www.thejournal.com/thefocus/37.cfm
In Lincolnshire, Ill., teachers at Adlai E. Stevenson
High School are mandated to be proficient in the “operation and
conceptualization of hardware and networks, applications, information tools,
and presentation tools.”
In Scott County, Ky., students throughout the school
district participate in a Digital Storytelling Project. The project lets
students select an appropriate story, restructure the story in response to a
“seven elements” model, create storyboards, gather content, produce
videos, and share them at a Digital Storytelling Festival.
In Granger, Ind., eighth-grade students at Discovery
Middle School produce a seven-minute news broadcast every morning. They make
the assignments; organize the crew; set up camera equipment; block the shots;
instruct others, including adults, in their roles in the production; and read
the news.
And in Redmond, Wash., Tom Charouhas, a science
teacher at Rose Hill Junior High School, uses “probeware” to show students
how to determine the force needed to maintain mechanical efficiency in pulley
systems. By using probeware, students can actually see the results of their
actions on numerous pulleys.
What's going on here?
It's technology in the classroom: word processing
programs, e-mail, databases and spreadsheets, modeling software,
closed-circuit television, computer networks, CD-ROM encyclopedias, network
search tools, desktop publishing, videotape recording and editing equipment,
and the list goes on and on. What the chalkboard was to the 20th-century
classroom, the computer is to the 21st-century classroom. The one important
difference is that the concept of the chalkboard didn't change much over the
decades; however, we're just at the beginning of the evolution of the computer
as a teaching and learning tool.
Lake Washington School District, which includes Tom
Charouhas' school and 41 others, is a good example of how far technology has
traveled in schools. The North Central Regional Educational Laboratory (NCREL),
online at http://www.ncrel.org, reports that the district started wiring its
schools back in 1989. Today, there is a computer for every four students, and
the district even has its own channel on cable TV. The district is also
committed to renewing its hardware every five years for desktops and every
four years for laptops, in addition to training all of its 1,300 teachers (no
teacher proficiency, no computer upgrade). Charouhas has seen a “slow and
steady climb” in not only the expertise of teachers in technology but also,
and much more importantly, the expertise of students. “You can talk about
concepts until you're blue in the face,” he says, but he believes that
students really learn the science when they actually do the science.
But, is it as easy as that? Is it just a matter of
“wiring”? The Center for Applied Research in Educational Technology
(CARET), online at http://caret.iste.org, has compiled evidence on just what
impact technology has had on student performance. It's concluded that
technology improves student performance when the application has the following
characteristics:
Curriculum. It directly supports the curriculum
objectives being assessed.
Collaboration. It provides opportunities for
student collaboration.
Feedback. It adjusts for student ability and prior
experience, and provides feedback to the student and teacher about student
performance or progress with the application.
Integration. It is integrated into the typical
instructional day.
Assessment. It provides opportunities for students
to design and implement projects that extend the curriculum content being
assessed by a particular standardized test.
Support. It is used in environments where teachers,
the school community, and school and district administrators support the use
of technology.
None of this, of course, should be surprising. As
Charouhas says, “The use of the technology cannot supersede the content…
[and] the most important [component] of any classroom is the teacher.”
Elliot Wolfe can attest to that. Wolfe, a senior at
Seattle's Garfield High School, takes classes at Seattle Central Community
College as part of a program called Running Start. On March 11, he made a
presentation on native Catholic boarding schools using PowerPoint and an LCD
projector. His 18 slides included photographs and facts about the nature of
the classes in boarding schools, where the schools were located, and how many
students attended each over a period of time. He used the slides to illustrate
the main points and then orally elaborated on them over the course of about 10
minutes.
Was it effective? Sure. All of us, including
students, learn in various ways (e.g., auditorily, visually, kinesthetically),
and the more of those ways a teacher can employ, the greater the chance of
learning. But is it a panacea?
Continued in the article
"Technology's Impact on Academic Achievement," by Samuel Besalel,
T.H.E. Journal, January 22, 2004 --- http://www.thejournal.com/thefocus/33.cfm
This issue is the first of two articles that focus on
the impact of technology on academic achievement. When examining technology's
contributions to education, age is not a factor. Throughout every age group,
students benefit from technology in the classroom.
There is a wide range of technology used in schools
today, from desktop computers in classrooms and labs to digital whiteboards,
digital projectors, laptop computers, wireless network technologies, devices
for special needs populations, and more.
In this issue, we will focus on how technology
increases classroom efficiency and facilitates learning in educational
settings from kindergarten through grade six.
We will also examine the kind of changes in student
learning that occur as a direct result of technology.
The Link Between Technology and Achievement
Technology in the classroom directly contributes to student achievement, both
by making students more effective in their learning and teachers more
efficient in their teaching.
Students are attracted to the use of computers, even
for such mundane applications as playing math games and reading online books.
But when used in this manner, don't they simply replace other possible
teaching methods or learning tools? Are there really advantages to such uses?
Actually, yes. Particularly in primary grades,
computers help to reinforce many basic skills. While a teacher might find it
hard to sustain a child's attention to teach and re-teach math facts, or the
spelling of the days of the week, students are much more tolerant of
repetition from a computer program; in fact, they come to expect it. This is
good news, because repetition is essential in areas such as beginner reading
and the learning of almost any fact.
For example, students playing a math game can feel
challenged by "beating the high score," making the learning of math
both competitive and fun, while encouraging additional practice and drilling
of facts.
Advantages are also to be had with online books.
Efficient reading goes beyond being able to recognize letters and words.
Phrasing is a key aspect of what good readers do. Many online book programs
not only display the words of a book, with pictures or animations, but also
include both an audio component and highlighting of phrases as the narrator
works through the text. This provides an accurate model of what good readers
do, helping to build fluent reading skills.
Teacher innovation has never been in short supply.
The innovative approaches educators use to leverage technology to the benefit
of their students is often more impressive than the technology itself.
With the appropriate targeting and application of
technology, substantial gains can be made for student achievement. Various
applications of technology can be effective when targeting primary school
students to introduce logical concepts, mathematical equations, and cause and
effect.
For example, I've witnessed effective lessons
presented in a computer lab to 20 or more students using only a single PC and
a digital projector.
Because people need to learn how to learn, computer
interfaces often pose problems to older learners. This is often not the case
with young students with fiercely inquisitive minds. Presented as play, I
observed how kindergarteners were cannily introduced to methods to approach
software programs. Using The Learning Company's Kid Pix ( http://www.kidpix.com
), the instructor quizzed students on their knowledge of seasons, nature and
animals. Together (with the instructor "driving"), they composed a
thematic painting for the fall harvesting season. The children observed how
various menus of related objects were stored. Using the objects to simulate
rubber stamps, together they designed a picture that used their current
knowledge and eased them into more information. In the process, they learned
to group relationships of animals and plants in higher and lower order (i.e.,
animals, animals with four legs, mammals) and were introduced to computer
terminology such as select, delete, edit, click, and so forth.
Continued in the article
"Evaluating the Impact of Technology: The Less Simple Answer," by
Doug Johnson, Educational Technology Journal, January/February 1996 --- http://www.fno.org/jan96/reply.html
From the National School Boards Association --- http://www.nsba.org/sbot/toolkit/tiol.html
From a Department of Education 1995
forum, some panelists contended that rather than debating the connections
between technology-based instruction and test scores, schools should focus on
the most obvious and compelling reason for implementing technology -- namely,
that students need strong technology skills to succeed in the world of work.
This section describes the impact technology has on learning.
You can find the following in this section:
ED Report The Costs and Effectiveness of
Educational Technology
"Through the use of advanced computing and telecommunications
technology, learning can also be qualitatively different. The process
of learning in the classroom can become significantly richer as
students have access to new and different types of information, can
manipulate it on the computer through graphic displays or controlled
experiments in ways never before possible, and can communicate their
results and conclusions in a variety of media to their teacher,
students in the next classroom, or students around the world. For
example, using technology, students can collect and graph real-time
weather, environmental, and populations data from their community, use
that data to create color maps and graphs, and then compare these maps
to others created by students in other communities. Similarly, instead
of reading about the human circulatory system and seeing textbook
pictures depicting bloodflow, students can use technology to see blood
moving through veins and arteries, watch the process of oxygen
entering the bloodstream, and experiment to understand the effects of
increased pulse or cholesterol-filled arteries on blood flow."
(page 16)
"We know now - based on decades of use in schools, on findings
of hundreds of research studies, and on the everyday experiences of
educators, students, and their families - that, properly used,
technology can enhance the achievement of all students, increase
families’ involvement in their children’s schooling, improve
teachers’ skills and knowledge, and improve school administration
and management."
TechKnowLogia --- http://www.techknowlogia.org/
TechKnowLogia
is an international online journal that provides policy makers, strategists,
practitioners and technologists at the local, national and global levels with a
strategic forum to:
Explore the vital role of
different information technologies (print, audio, visual and digital) in the
development of human and knowledge capital;
Share policies, strategies,
experiences and tools in harnessing technologies for knowledge
dissemination, effective learning, and efficient education services;
Review the latest systems and
products of technologies of today, and peek into the world of tomorrow; and
Exchange information about
resources, knowledge networks and centers of expertise.
- Do Technologies
Enhance Learning?
- Brain Research,
Learning and Technology
- Technologies
at Work for: Critical Thinking, Science Instruction, Teaching
Practices, etc...
- Interactive TV as
an Educational Tool
- Complexity of
Integrating ICTs into Curriculum & Exams
- Use of Digital
Cameras to Enhance Learning
- Creating
Affordable Universal Internet Access
Bob Jensen's threads on education technologies are at http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
Reading Online (for K-12
Teachers and Students) --- http://www.readingonline.org/
Reading Online
(ROL) is a peer-reviewed journal of the International
Reading Association (IRA). Since its launch in May 1997 it has become a
leading online source of information for the worldwide literacy-education
community, with tens of thousands of accesses to the site each month.
The journal focuses
on literacy practice and research in classrooms serving students aged 5 to 18.
“Literacy” is broadly defined to include traditional print literacy, as
well as visual literacy, critical literacy, media literacy, digital literacy,
and so on. A special mission of the journal is to support professionals as
they integrate technology in the classroom, preparing students for a future in
which literacy’s meaning will continue to evolve and expand.
The journal is guided
by an editorial
council whose members adjudicate manuscripts submitted for peer review. In
addition to articles, ROL includes invited features, online versions of
content from IRA’s peer-reviewed print journals, and reports and other
documents of interest to the worldwide literacy education community.
Important Distance
Education Site
The Sloan Consortium --- http://www.aln.org/
The purpose of the Sloan Consortium (Sloan-C) is to help learning organizations
continually improve quality, scale, and breadth according to their own
distinctive missions, so that education will become a part of everyday life,
accessible and affordable for anyone, anywhere, at any time, in a wide variety
of disciplines.
From Syllabus News on October 14, 2003
Online University Consortium Releases Learner
Assessment Tool
A network of universities founded to help companies
and employees secure a quality online education announced a Web-based
assessment tool for prospective students considering online degree programs.
The Online Learner Assessment, unveiled by the Online University Consortium,
helps students determine their aptitude for online education in order to
choose the best source for their individual learning style. The tool helps
Online UC to match learners with qualified degree programs.
"The tool helps learners avoid costly mistakes
by making the best education choice for their individual needs," said
Greg Eisenbarth, Online UC's executive director. "This allows targeted
development and enhances ROI for corporations funding employee training."
Read more: http://info.101com.com/default.asp?id=3157
Thinking About Assessment: Assessment is education's new
apple-pie issue. Unfortunately, the devil is in the details, by Kenneth
C. Green - August 2001 --- http://www.convergemag.com/magazine/story.phtml?id=3030000000002596
Assessment has become the big thing. President
Clinton supported assessment. President Bush supports assessment. It seems
like every member of Congress favors assessment. So too, it seems, do all
the nation's governors, and almost every elected state and local official
-- school board members, city council members, mayors, city attorneys,
sheriffs, county commissioners, park commissioners, and more.
The CEOs of major U.S. companies want more
assessment. Moreover, many school superintendents, like Education Secretary
Rod Paige, former superintendent of the Houston Independent School
District, also support assessment.
Assessment is education's new apple pie issue.
Everyone supports efforts to improve education; and everyone seems to
believe more assessment will help improve education.
It's just grand that many people in so many
elected and administrative offices support assessment.
There is, however, one little problem: getting
all these individuals to agree on how and what to assess and how to use
the data. They all agree about the need for more assessment.
Unfortunately, the devil is in the details.
It may be a stretch, but I see some striking
similarities in the public conversation about technology and assessment.
First, well-informed folks -- some in education,
some not -- believe that more assessment will improve education.
Similarly, many people -- some who are educators and many others who
simply care about education -- believe that more technology will improve
education.
Second, assessment costs lots of money. One
dimension of the discussion underway in Congress and in state capitols
involves how much money to spend on assessment. Similarly, one dimension
of the continuing conversation about technology in schools and colleges is
about the costs.
Third, it seems like everyone has strong opinions
about assessment. Moreover, anyone with an opinion becomes an immediate
expert. Similarly, it seems like everyone has strong opinions about
technology. Moreover, like opinions about assessment, anyone with an
opinion about technology believes it is an expert opinion. In an
interesting and important twist on Cartesian logic, we are all sum ergo
experts on both assessment and technology.
Finally, as an acknowledged sum ergo expert, let
me suggest an additional similarity: Those who profess great faith in the
power of assessment or technology to enhance education may be engaged in
just that -- an act of faith!
Wait, please. Let me explain. I believe in
assessment. I believe in technology. But I also believe in research. And
while I know a little less about the assessment literature and a little
more about the technology literature, I do know enough about both to know
that the research literature in both areas is often ambiguous.
Indeed, advocates for both assessment and for
technology often have to confront the "no
significant differences" question.
For those of you who missed statistics in college, this means asking whether,
at the end of the day, the treatment (the intervention) generates a
statistically significant difference in outcomes or performance.
Here, the hard questions are about learning
outcomes. Let's frame the questions as hypotheses in a doctoral
dissertation:
H1: Assessment contributes to enhanced learning
outcomes for individual students.
H2: Assessment contributes to the enhanced
performance of schools and colleges.
H3: Technology contributes to enhanced learning
outcomes for individual students.
H4: Technology contributes to the enhanced
performance of schools and colleges.
You may take issue with the academic
presentation. However, in the context of the public discussions, as well
as public policy and educational planning, these are the core issues: Do
assessment and technology contribute to enhanced student learning and to
the enhanced performance of schools and colleges?
Alas, we don't really know. We think we know. We
draw on personal experience as hard data. We accept anecdote and
testimonial as evidence of impacts. But the hard research evidence remains
elusive; the aggregated research is ambiguous.
Indeed, it may well be a good (and obvious)
"intervention," as suggested by President Bush and others, to
conduct annual "reading and math assessments [to] provide parents
with the information they need, to know how well their child is doing in
school, and how well the school is educating their child." But we
really do not know if this will make a difference in educational
experiences of students or the effectiveness of individual schools.
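Green's "no significant differences" question comes down to a two-sample comparison of outcomes. The sketch below is entirely illustrative: the exam scores are invented, not drawn from any study Green cites, and it uses Welch's t statistic simply as one common way such a comparison is computed.

```python
# Illustrative only: how a researcher might check a hypothesis like H3
# ("technology contributes to enhanced learning outcomes") and arrive at
# a "no significant difference" finding. All scores are invented.
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (does not assume equal variances)."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / n_a
    mean_b = sum(sample_b) / n_b
    # Sample variances with the (n - 1) denominator.
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / n_a + var_b / n_b)

# Hypothetical final-exam scores: a technology-based section vs. a
# traditional lecture section of the same course.
tech = [74, 81, 69, 77, 85, 72, 79, 70]
trad = [71, 78, 74, 76, 80, 69, 75, 73]

t = welch_t(tech, trad)
print(round(t, 3))
```

With |t| well below the rough 5% cutoff of about 2.1 for samples this small, these invented scores would be reported as "no significant difference" — exactly the ambiguous result Green says the research literature keeps producing.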
Also see http://www.campuscomputing.net/
Controversies Regarding Pedagogy
"No Lectures or Teachers, Just Software," by
Joshua Green, The New York Times, August 10, 2001 --- http://www.nytimes.com/library/tech/00/08/circuits/articles/10prof.html
The aim is to get students to delve into a
course's volumes of academic information, including hours of videotape of
experts in a field related to the program. Students running Krasnovia, for
example, can draw on video advice from Thomas Boyatt, a former ambassador,
and Bruce Laingen, an American diplomat who was held hostage in Iran and
is president of the American Academy of Diplomacy.
Rather than subject students to full-blown
lectures, Dr. Schank breaks the video into snippets that address only the
question at hand. He believes students learn more effectively through this
piecemeal approach, which he calls "just in time" learning.
"The value of the computer is
that it allows kids to learn by doing," he said. "People don't
learn by being talked at. They learn when they attempt to do something and
fail. Learning happens when they try to figure out why."
Bald, bearded and powerfully built, Dr. Schank's
appearance and demeanor suggest Marlon Brando in the movie
"Apocalypse Now." His professional reputation is somewhat
similar. His brusque manner and outspoken criticism of those he disagrees
with have alienated some colleagues and earned him the reputation of
iconoclast. But his success in designing teaching software has made him a
much sought after figure among businesses, military clients and
universities.
His company puts extraordinary effort into
creating software courses, each of which can take up to a year to design
and can cost up to $1 million. Video is an important component of Dr.
Schank's program. After interviewing professors, his staff develops a
story, writes a script, hires professional actors and begins filming.
Cognitive Arts even arranged the use of CNN footage of the Bosnian
conflict to lend the aura of authenticity to Crisis in Krasnovia.
The programs allow students to progress at their
own pace. Dr. Schank says the semester system is badly outdated, a view he
also holds for most tests, which foster only temporary memorization, he
says. His programs require students to write detailed reports on what they
have learned. A student who cuts corners does not finish the course, and
the failing grade is delivered in the spirit of a video game. In Krasnovia,
for instance, an incomplete report would draw a mock newscast in which
commentators ridicule the president's address. Students must then go back
and improve their work.
These multimedia simulations differ radically
from current online offerings. "When you look at online courses now,
what do you see?" Dr. Schank said. "Text online with a quiz.
We're not taking a lecture and putting it on screen. We're restructuring
these courses into goal-based scenarios that will get kids excited."
Dr. Schank says that such courses will render
traditional classes -- and many professors -- obsolete. "The idea of
one professor for one class is ancient," he said. "New
technology is going to give every student access to the best professors in
the world."
But many academics dismiss Dr. Schank's
prediction that traditional teaching methods will soon become obsolete and
question software learning's pedagogic value. "Education depends on
relationships between people," said David F. Noble, a history
professor at York University in Toronto and a critic of online learning.
"Interactive is not the same as interpersonal. What Schank doesn't
recognize is that teaching is not just about relaying knowledge."
Others warn against accepting radical new
technology without pause. "The American university system is a highly
functional institution," said Phil Agre, an associate professor of
information studies at the University of California at Los Angeles.
"The danger is that we will apply overly simplistic ideas about
technology and tear apart the institution before we really know what we're
doing."
Related evidence on impact of removing lectures
from course is found in the BAM project described at http://faculty.trinity.edu/rjensen/265wp.htm
October 8, 2003 message from Laurie Padgett [padgett8@BELLSOUTH.NET]
Lauretta,
Yes it was live chat (synchronous) using voice
(which also had a text chat box). In this particular class we would meet
every other week in the evening around 7 or 8. I think they lasted 1 hour to
1 1/2 hours (I cannot recall exactly). I took two classes a semester, so I
would attend two live chats every two weeks. The instructors would
coordinate to ensure they would not plan the classes for the same evening.
In addition to the live chat, we also used another program that I just
cannot remember the name of (I think it might have been called Placeware).
It was really neat because it looked like an auditorium and you were a
little character (or may I say a colored dot). You could raise your hand,
ask a question, type text, etc. We would use the chat program where he
would talk as he conducted the presentation in the other program. If you
had a question you would raise your hand & then use the live chat to
talk. The program was starting to get more advanced as I graduated.
The Master's of Accounting program that I went
through (as I understand it from the professor I had) was one of the first
to go online for this particular program. I was in the first graduating
class which started April of 2000 and completed September 2001. I attended
Nova Southeastern University in Florida. ( http://emacc.huizenga.nova.edu/
)
I know that some feel that live chat
(synchronous) might not work due to time zones and some feel that the text
works just as well. From my personal experience and opinion, I feel that a
Master's program in "Accounting" needs more than just written text; it
also needs interaction with your fellow classmates. I feel it was
more productive because it is like you are sitting in a class listening to
the instructor and you have the opportunity to ask a question by typing in
the box & then the instructor sees it & answers it with his voice.
Additionally, you cover much more subject area than you can with a text
chat. It really worked well.
Again, these are my opinions and each person has
his own. This is what makes us unique.
Laurie
-----Original Message-----
Subject: Re: peer evaluation of a web-based course
Laurie:
When you say "live" chat, are you
referring to the chats in which all students come together at the same
time (synchronous)? I tried to initiate this type of chat in my online
class and found students' schedules to be an issue.
Has anyone tried putting students into groups
to do synchronous chatting about assignments? How did this work for your
class?
Lauretta A. Cooper, MBA, CPA
Delaware Technical & Community College Terry Campus
"Seven
Principles of Effective Teaching: A Practical Lens for Evaluating Online
Courses"
by Charles Graham, Kursat Cagiltay, Byung-Ro Lim, Joni Craner and Thomas M.
Duffy
Assessment, March/April 2001 --- http://horizon.unc.edu/TS/default.asp?show=article&id=839
Reproduced below with permission.
The "Seven
Principles for Good Practice in Undergraduate Education," originally
published in the AAHE Bulletin (Chickering & Gamson, 1987), are
a popular framework for evaluating teaching in traditional, face-to-face
courses. The principles are based on 50 years of higher education research (Chickering
& Reisser, 1993). A faculty inventory (Johnson Foundation,
"Faculty," 1989) and an institutional inventory (Johnson
Foundation, "Institutional," 1989) based on these principles have
helped faculty members and higher-education institutions examine and improve
their teaching practices.
We, a team of five
evaluators from Indiana University's Center for Research on Learning and
Technology (CRLT), recently used these principles to evaluate four online
courses in a professional school at a large Midwestern university. (The
authors are required to keep the identity of that university
confidential.—Ed.) The courses were taught by faculty members who also
taught face-to-face courses. Conducted at the joint request of faculty and
administration, the evaluations were based on analysis of online course
materials, student and instructor discussion-forum postings, and faculty
interviews. Although we were not permitted to conduct student interviews
(which would have enriched the findings), we gained an understanding of
student experiences by reading postings to the discussion forum.
Taking the perspective
of a student enrolled in the course, we began by identifying examples of
each of Chickering and Gamson's seven principles. What we developed was a
list of "lessons learned" for online instruction that correspond
to the original seven principles. Since this project involved practical
evaluations for a particular client, these lessons should not be used to develop a
set of global guidelines. And since our research was limited in scope and
was more qualitative than quantitative, the evaluations should not be
considered a rigorous research project. Their value is to provide four case
studies as a stimulus for further thought and research in this direction.
Principle 1:
Good Practice Encourages Student-Faculty Contact
Lesson for
online instruction: Instructors should provide clear guidelines for
interaction with students.
Instructors wanted to be
accessible to online students but were apprehensive about being overwhelmed
with e-mail messages or bulletin board postings. They feared that if they
failed to respond quickly, students would feel ignored. To address this, we
recommend that student expectations and faculty concerns be mediated by
developing guidelines for student-instructor interactions. These guidelines
would do the following:
- Establish policies describing
the types of communication that should take place over
different channels. Examples are: "Do not send technical support
questions to the instructor; send them to techsupport@university.edu."
Or: "The public discussion forum is to be used for all
communications except grade-related questions."
- Set clear standards for
instructors' timelines for responding to messages. Examples:
"I will make every effort to respond to e-mail within two days of
receiving it" or "I will respond to e-mails on Tuesdays and
Fridays between three and five o'clock."
Principle 2:
Good Practice Encourages Cooperation Among Students
Lesson for
online instruction: Well-designed discussion assignments facilitate
meaningful cooperation among students.
In our research, we
found that instructors often required only "participation" in the
weekly class discussion forum. As a result, discussion often had no clear
focus. For example, one course required each of four students in a group to
summarize a reading chapter individually and discuss which summary should be
submitted. The communication within the group was shallow. Because the
postings were summaries of the same reading, there were no substantive
differences to debate, so that discussions often focused on who wrote the
most eloquent summary.
At the CRLT, we have
developed guidelines for creating effective asynchronous discussions, based
on substantial experience with faculty members teaching online. In the
study, we applied these guidelines as recommendations to encourage
meaningful participation in asynchronous online discussions. We recommended
the following:
- Learners should be required to
participate (and their grade should depend on participation).
- Discussion groups should remain
small.
- Discussions should be focused on
a task.
- Tasks should always result in a
product.
- Tasks should engage learners in
the content.
- Learners should receive feedback
on their discussions.
- Evaluation should be based on
the quality of postings (and not the length or number).
- Instructors should post
expectations for discussions.
Principle 3:
Good Practice Encourages Active Learning
Lesson for
online instruction: Students should present course projects.
Projects are often an
important part of face-to-face courses. Students learn valuable skills from
presenting their projects and are often motivated to perform at a higher
level. Students also learn a great deal from seeing and discussing their
peers' work.
While formal
synchronous presentations may not be practical online, instructors can still
provide opportunities for projects to be shared and discussed
asynchronously. Of the online courses we evaluated, only one required
students to present their work to the class. In this course, students
presented case study solutions via the class Web site. The other students
critiqued the solution and made further comments about the case. After all
students had responded, the case presenter updated and reposted his or her
solution, including new insights or conclusions gained from classmates. Only
at the end of all presentations did the instructor provide an overall
reaction to the cases and specifically comment about issues the class
identified or failed to identify. In this way, students learned from one
another as well as from the instructor.
Principle 4:
Good Practice Gives Prompt Feedback
Lesson for
online instruction: Instructors need to provide two types of
feedback: information feedback and acknowledgment feedback.
We found during the
evaluation that there were two kinds of feedback provided by online
instructors: "information feedback" and "acknowledgement
feedback." Information feedback provides information or evaluation,
such as an answer to a question, or an assignment grade and comments.
Acknowledgement feedback confirms that some event has occurred. For example,
the instructor may send an e-mail acknowledging that he or she has received
a question or assignment and will respond soon.
We found that
instructors gave prompt information feedback at the beginning of the
semester, but as the semester progressed and instructors became busier, the
frequency of responses decreased, and the response time increased. In some
cases, students got feedback on postings after the discussion had already
moved on to other topics. Clearly, the ideal is for instructors to give
detailed personal feedback to each student. However, when time constraints
increase during the semester's busiest times, instructors can still give
prompt feedback on discussion assignments by responding to the class as a
whole instead of to each individual student. In this way, instructors can
address patterns and trends in the discussion without being overwhelmed by
the amount of feedback to be given.
Similarly, we found
that instructors rarely provided acknowledgement feedback, generally doing
so only when they were behind and wanted to inform students that assignments
would be graded soon. Neglecting acknowledgement feedback in online courses
is common, because such feedback involves purposeful effort. In a
face-to-face course, acknowledgement feedback is usually implicit. Eye
contact, for example, indicates that the instructor has heard a student's
comments; seeing a completed assignment in the instructor's hands confirms
receipt.
Principle 5:
Good Practice Emphasizes Time on Task
Lesson for
online instruction: Online courses need deadlines.
One course we evaluated
allowed students to work at their own pace throughout the semester, without
intermediate deadlines. The rationale was that many students needed
flexibility because of full-time jobs. However, regularly distributed
deadlines encourage students to spend time on tasks and help students with
busy schedules avoid procrastination. They also provide a context for
regular contact with the instructor and peers.
Principle 6:
Good Practice Communicates High Expectations
Lesson for
online instruction: Challenging tasks, sample cases, and praise for
quality work communicate high expectations.
Communicating high
expectations for student performance is essential. One way for instructors
to do this is to give challenging assignments. In the study, one instructor
assigned tasks requiring students to apply theories to real-world situations
rather than remember facts or concepts. This case-based approach involved
real-world problems with authentic data gathered from real-world situations.
Another way to
communicate high expectations is to provide examples or models for students
to follow, along with comments explaining why the examples are good. One
instructor provided examples of student work from a previous semester as
models for current students and included comments to illustrate how the
examples met her expectations. In another course, the instructor provided
examples of the types of interactions she expected from the discussion
forum. One example was an exemplary posting while the other two were
examples of what not to do, highlighting trends from the past that
she wanted students to avoid.
Finally, publicly
praising exemplary work communicates high expectations. Instructors do this
by calling attention to insightful or well-presented student postings.
Principle 7:
Good Practice Respects Diverse Talents and Ways of Learning
Lesson for
online instruction: Allowing students to choose project topics
incorporates diverse views into online courses.
In several of the
courses we evaluated, students shaped their own coursework by choosing
project topics according to a set of guidelines. One instructor gave a
discussion assignment in which students researched, presented, and defended
a current policy issue in the field. The instructor allowed students to
research their own issue of interest, instead of assigning particular
issues. As instructors give students a voice in selecting their own topics
for course projects, they encourage students to express their own diverse
points of view. Instructors can provide guidelines to help students select
topics relevant to the course while still allowing students to share their
unique perspectives.
Conclusion
The "Seven
Principles of Good Practice in Undergraduate Education" served as a
practical lens for our team to evaluate four online courses in an accredited
program at a major U.S. university. Using the seven principles as a general
framework for the evaluation gave us insights into important aspects of
online teaching and learning.
A comprehensive report of the
evaluation findings is available in a CRLT technical report (Graham, et al.,
2000).
References
Chickering, A., &
Gamson, Z. (1987). Seven principles of good practice in undergraduate
education. AAHE Bulletin, 39, 3-7.
Chickering, A., &
Reisser, L. (1993). Education and identity. San Francisco: Jossey-Bass.
Graham, C., Cagiltay,
K., Craner, J., Lim, B., & Duffy, T. M. (2000). Teaching in a
Web-based distance learning environment: An evaluation summary based on four
courses. Center for Research on Learning and Technology Technical Report No.
13-00. Indiana University Bloomington. Retrieved September 18, 2000
from the World Wide Web: http://crlt.indiana.edu/publications/crlt00-13.pdf
Principles for good
practice in undergraduate education: Faculty inventory. (1989). Racine,
WI: The Johnson Foundation, Inc.
Principles for good
practice in undergraduate education: Institutional inventory. (1989).
Racine, WI: The Johnson Foundation, Inc.
A comprehensive report of the evaluation findings is
available on the Web (in PDF format) at http://crlt.indiana.edu/publications/crlt00-13.pdf
Teaching at an Internet Distance: the Pedagogy of Online Teaching and
Learning The Report of a 1998-1999 University of Illinois Faculty Seminar
--- http://www.vpaa.uillinois.edu/tid/report/tid_report.html
In response to faculty concern about the
implementation of technology for teaching, a year-long faculty seminar was
convened during the 1998-99 academic year at the University of Illinois.
The seminar consisted of 16 members from all three University of Illinois
campuses (Chicago, Springfield, and Urbana-Champaign) and was
evenly split, for the sake of scholarly integrity, between
"skeptical" and "converted" faculty.
The seminar focused almost entirely on
pedagogy. It did not evaluate hardware or software,
nor did it discuss how to provide access to online courses or how to keep
them secure. Rather, the seminar sought to identify what makes teaching
good teaching, whether in the classroom or online. External speakers at
the leading edge of this discussion also provided pro and con views.
The seminar concluded that online teaching and
learning can be done with high quality if new approaches are employed
which compensate for the limitations of technology, and if
professors make the effort to create and maintain the human touch of
attentiveness to their students. Online
courses may be appropriate for both traditional and non-traditional
students; they can be used in undergraduate education, continuing
education, and in advanced degree programs. The seminar participants
thought, however, that it would be inappropriate to provide an entire
undergraduate degree program online. Participants concluded that the
ongoing physical and even emotional interaction between teacher and
students, and among students themselves, was an integral part of a
university education.
Because high quality online teaching is time and
labor intensive, it is not likely to be the income source envisioned by
some administrators. Teaching the same number of students online at the
same level of quality as in the classroom requires more time and money.
From our fundamental considerations of pedagogy
we have prepared a list of practice-oriented considerations for professors
who might be interested in teaching online, and another list for
administrators considering expanding online course offerings.
Practical Considerations for Faculty:
Whom do I teach? (Sections 2,3) The fraction of
"nontraditional" students is not as high as some make it out to
be, but is still significant. Stemming from the baby boomlet, the number
of young, "traditional" students will be as high as or higher than
ever through the next decade. Many contexts of online course delivery
given in Table 5, for professional training/continuing education,
undergraduate education, and graduate education for both traditional and
nontraditional students, are viable. There are several exceptions: first,
certain types of advanced graduate work cannot be performed online, and
second, traditional students benefit from the maturing, socializing
component of an undergraduate college education and this requires an
on-campus presence.
How do I teach? (Sections 4,5) Attempts are being
made to use instructional technology such as real-time two-way
videoconferencing to simulate the traditional classroom. With
improvements in technology this mode may yet succeed, but from what we
have seen, the leaders in this area recommend shifts from
"traditional" teaching paradigms. Two new online paradigms that
appear to work well are text-based computer mediated communication (CMC)
for courses that are traditionally taught in the discussion or seminar
mode, and interactive, graphically based material for courses that are
traditionally taught in the lecture mode. Methods are by no means limited
to these two.
How many do I teach? (Section 5) High quality
teaching online requires smaller student/faculty ratios. The shift from
the classroom to online has been described as a shift from
"efficiency to quality." We also believe a motivational human
touch must come into play in the online environment just as it does in
the classroom. Students should feel they are members of a learning
community and derive motivation to engage in the material at hand from the
attentiveness of the instructor.
How do I ensure high quality of online teaching?
(Sections 2, 6, 7) Quality is best assured when ownership of developed
materials remains in the hands of faculty members. The University of
Illinois' Intellectual Property Subcommittee Report on Courseware
Development and Distribution recommends that a written agreement between the
courseware creator and the administration be made in advance of any work
performed. Evaluation of learning effectiveness is also a means to ensure
high quality. We suggest a broad array of evaluation areas that includes,
but is not limited to, a comparison of learning competence with the
traditional classroom.
Policy Issues for Administrators
How do I determine the worth of teaching
technology? (Sections 1, 2) On any issue involving pedagogy, faculty
members committed to teaching should have the first and last say. On the
other hand, faculty must be held responsible for good teaching. Online
courses should not be motivated by poor instructor performance in large
classes.
How do I encourage faculty to implement
technology in their teaching? (Section 7) Teaching innovation should be
expected, respected, and rewarded as an important scholarly activity. At
the same time, not all classes are amenable to online delivery.
To ensure the quality of a course, it is
essential that knowledgeable, committed faculty members continue to have
responsibility for course content and delivery. Therefore, intellectual
property policies should allow for faculty ownership of online courseware.
The commissioning of courses from temporary instructors should be avoided,
and the university should be wary of partnerships with education providers
in which faculty members have commercial interests.
Will I make money with online teaching? (Sections
3, 5) The scenario of hundreds or thousands of students enrolling in a
well developed, essentially instructor-free online course does not appear
realistic, and efforts to do so will result in wasted time, effort, and
expense. With rare exceptions, the successful online courses we have seen
feature low student to faculty ratios. Those rare exceptions involve
extraordinary amounts of the professor's time. And besides the initial
investment in the technology, technical support for professors and
students and maintenance of hardware and software are quite expensive.
Online teaching has been said to be a shift from
"efficiency" to "quality," and quality usually doesn't
come cheaply. Sound online instruction is not likely to cost less than
traditional instruction. On the other hand, some students may be willing
to pay more for the flexibility and perhaps better instruction of high
quality online courses. This is the case for a growing number of graduate
level business-related schools. However, it is likely that a high number
of "traditional" students, including the baby boomlet, will
continue to want to pay for a directly attentive professor and the
on-campus social experience.
How do I determine if online teaching is
successful? (Sections 5, 6) In the short term, before history answers this
question, we think that a rigorous comparison of learning competence with
traditional classrooms can and should be done. High quality online
teaching is not just a matter of transferring class notes or a videotaped
lecture to the Internet; new paradigms of content delivery are needed.
Particular features to look for in new courses are the strength of
professor-student and student-student interactions, the depth at which
students engage in the material, and the professor's and students' access
to technical support. Evidence of academic maturity, such as critical
thinking and synthesis of different areas of knowledge, should be present
in more extensive online programs.
For the complete report, go to http://www.vpaa.uillinois.edu/tid/report/tid_report.html
SOME HELPFUL LINKS
The SCALE experiments at the University of Illinois. You can find a
review and the links at http://faculty.trinity.edu/rjensen/255wp.htm
The LEAD program at the University of Wisconsin. See
http://faculty.trinity.edu/rjensen/book01q1.htm#020901
The Clipper Project at Lehigh University. See
http://clipper.lehigh.edu/
Download Dan Stone's audio and presentation files from http://www.cs.trinity.edu/~rjensen/000cpe/00start.htm
Evaluating Online Educational Materials for Use in Instruction (tremendous
links) --- http://www.ed.gov/databases/ERIC_Digests/ed430564.html
Do you recall the praise that I
lavished on the ethics website of a Carnegie-Mellon University Philosophy
Professor named Robert Cavalier in my March 22, 2000 edition of New
Bookmarks? See http://faculty.trinity.edu/rjensen/book00q1.htm#032200
Robert Cavalier now has an article
entitled "Cases, Narratives, and Interactive Multimedia," in Syllabus,
May 2000, pp. 20-22. The online version of the Syllabus article is not
yet posted, but will eventually be available at http://www.syllabus.com/
The purpose of
our evaluation of A Right to Die? The Case of Dax Cowart was
to see if learning outcomes for case studies could be enhanced with the
use of interactive multimedia. My Introduction to Ethics class was
divided into three groups: Text, Film, and CD-ROM. Equal
distribution was achieved by using student scores on previous exams plus
their Verbal SAT scores.
Two graders
were trained and achieved more than 90 percent inter-grader reliability.
The students' performance results were analyzed statistically, and the
null hypothesis was rejected for the CD/Film and CD/Text comparisons:
a statistically significant
difference was demonstrated in favor of interactive multimedia.
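As a sketch of the two checks described in the evaluation above — percent agreement between two trained graders, and a two-sample comparison of group exam scores — the following uses entirely hypothetical numbers (not Cavalier's data) and a Welch t statistic in place of whatever specific test the study used:

```python
# Hypothetical illustration only: neither the scores nor the procedure
# come from Cavalier's actual evaluation of the Dax Cowart case study.

def percent_agreement(grader_a, grader_b, tolerance=0):
    """Share of items on which the two graders' scores match within `tolerance`."""
    matches = sum(1 for a, b in zip(grader_a, grader_b) if abs(a - b) <= tolerance)
    return matches / len(grader_a)

def mean(xs):
    return sum(xs) / len(xs)

def two_sample_t(x, y):
    """Welch's t statistic for two independent samples (unequal variances)."""
    mx, my = mean(x), mean(y)
    vx = sum((v - mx) ** 2 for v in x) / (len(x) - 1)
    vy = sum((v - my) ** 2 for v in y) / (len(y) - 1)
    return (mx - my) / ((vx / len(x) + vy / len(y)) ** 0.5)

# Two trained graders scoring the same ten essays (made-up scores).
grader_a = [85, 78, 92, 70, 88, 95, 60, 82, 77, 90]
grader_b = [85, 80, 92, 70, 86, 95, 62, 82, 77, 90]
print(percent_agreement(grader_a, grader_b, tolerance=2))  # within-2-point agreement

# Exam scores for two of the three treatment groups (made-up scores).
cd_rom = [88, 92, 85, 90, 87, 91]
text   = [78, 75, 82, 80, 77, 79]
print(two_sample_t(cd_rom, text))  # large positive t favors the CD-ROM group
```

A large |t| for the CD/Text comparison is what "null hypothesis rejected in favor of interactive multimedia" amounts to operationally; in practice one would also compute degrees of freedom and a p-value.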
- Microsoft in Higher Education - Case Studies
- Internet Connections
- The Web of Asynchronous Learning Networks
- Asynchronous Learning Magazine
- Case Studies In Science Education
- State of Change: Images of Science Education Reform in Iowa
- Wisconsin Center for Education Research
- The Internet and Distance Learning in Accounting Education—IFAC
- Good links to education sites http://www.teleport.com/~hadid/bookmark_page.html
- US News Online Comparisons of Programs in Higher Education
- ERIC #E530
- Education Review: A Journal of Book Reviews
- Assessment and Accountability Program
- The "No Significant Difference" Phenomenon (education technology, history)
- Bibliography on Evaluating Internet Resources
- Assessing Child Behavior and Learning Abilities
- Case-Based Reasoning in the Web
- CLAC 1998 Annual Conference at Trinity University
- Howard Gardner: Seven Types of Intelligence
- Welcome to the ETS Net
- net.wars / contents (top site from Mike Kearl)
- Heinemann Internet Help Subject Guide (Help in Using Search Engines)
- Index of infobits/text/
- Margaret Fryatt's Home Page at OISE
- FIU Student Evaluations of Courses
- The University of Western Ontario Student Evaluations
- Meeting the Training Challenge
- Net Search
- Network-Based Electronic Publishing of Scholarly Works: A Selective Bibliography
- Real Problems in a Virtual World
- Seeing Through Computers, Sherry Turkle, The American Prospect
- Technology for Creating a Paperless Course
- Technology Review Home Page
- The Center for Educational Technology Program
- The Distance Educator
- The World Lecture Hall
- The World-Wide Web Virtual Library: Educational Technology (21-May-1996)
- TR: October '96: Brody
- Math Forum: Bibliography - Alternative Instruction/Assessment
- Pathways to School Improvement
- Education World (tm) Where Educators Go To Learn
The Theory Into Practice Database http://www.gwu.edu/~tip/index.html
The word "metacognition" arises once
again.
"Assessing the Impact of Instructional Technology on Student
Achievement," by Lorraine Sherry, Shelley Billig, Daniel Jesse, and Deborah
Watson-Acosta, T.H.E. Journal, February 2001, pp. 40-43 --- http://www.thejournal.com/magazine/vault/A3297.cfm
Four separate simplified path analysis models were
tested. The first pair addressed process and product outcomes for class
motivation, and the second pair addressed school motivation. The statistically
significant (p < .05) results were as follows:
- Motivation was related to metacognition. The
relationship between class motivation and metacognition was slightly
stronger (R = .307, p < .0001) than the relationship between school
motivation and metacognition (R = .282, p < .0001).
- The relationship between metacognition and inquiry
learning (Beta = .546, p < .0001) was stronger than the relationship
between metacognition and application of skills (Beta = .282, p <
.0001).
- The relationship between inquiry learning and the
student learning process outcome (Beta = .384, p = .001) was stronger than
the relationship between application of skills and the student learning
process outcome (Beta = -.055, not significant).
- The relationship between application of skills and
the student product outcome (Beta = .371, p = .004) was stronger than the
relationship between inquiry learning and the student product outcome
(Beta = .063, not significant).
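The R and Beta values quoted above are correlation and standardized regression coefficients; for a single-predictor regression on z-scored variables, the two coincide. A minimal sketch of how such a coefficient is computed, using made-up survey numbers rather than the study's data:

```python
# Illustrative only: these numbers are invented, not from the
# Sherry/Billig/Jesse/Watson-Acosta study in T.H.E. Journal.

def pearson_r(x, y):
    """Pearson correlation; equals the standardized slope (Beta) in a
    one-predictor regression on z-scored variables."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical survey scales for eight students: motivation vs. metacognition.
motivation    = [3.1, 4.0, 2.5, 3.8, 4.5, 2.9, 3.5, 4.2]
metacognition = [2.8, 3.9, 2.6, 3.5, 4.4, 3.0, 3.2, 4.0]
print(round(pearson_r(motivation, metacognition), 3))  # strongly positive here
```

In the study's multi-predictor path models the Betas are partial coefficients rather than plain correlations, but the same standardization logic underlies them.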
Clearly, correlation does not imply causality.
However, when each of these elements was considered as an independent
variable, there was a corresponding change in associated dependent variables.
For example, there was a significant correlation between motivation and
metacognition, indicating that students' enthusiasm for learning with
technology may stimulate students' metacognitive (strategic) thinking
processes. The significant correlations between motivation, metacognition,
inquiry learning, and the student learning process score indicate that
motivation may drive increases in the four elements connected by the first
path. Similarly, the significant correlations between motivation,
metacognition, application of skills, and the student product score indicate
that motivation may drive increases in the four elements connected by the
second path.
The significant correlations of the two
teacher measurements of student achievement with the student survey data
validated the evaluation team's extension of the Developing
Expertise model to explain increases in student performance as a result of
engaging in technology-supported learning activities. Moreover, nearly all
students across the project met the standards for both the teacher-created
student product assessment and the learning process assessment. This indicates
that, in general, the project had a positive impact on student achievement.
Conclusions
These preliminary findings suggest that
teachers should emphasize the use of metacognitive skills, application of
skills, and inquiry learning as they infuse technology into their respective
academic content areas. Moreover, these activities are directly in line with
the Vermont Reasoning and Problem Solving Standards, and with similar
standards in other states. The ISTE/NETS standards for assessment and
evaluation also suggest that teachers:
- Apply technology in assessing student learning of
subject matter using a variety of assessment techniques.
- Use technology resources to collect and analyze
data, interpret results, and communicate findings to improve instructional
practice and maximize student learning.
- Apply multiple evaluation methods to determine
students' appropriate use of technology resources for learning,
communication and productivity.
Rockman (1998) suggests that "A clear assessment
strategy that goes beyond standardized tests enables school leaders,
policymakers, and the community to understand the impact of technology on
teaching and learning." RMC Research Corporation's extension of the
Sternberg model can be used to organize and interpret a variety of student
self-perceptions, teacher observations of student learning processes, and
teacher-scored student products. It captures the overlapping kinds of
expertise that students developed throughout their technology-related
activities.
One of the greatest challenges facing the Technology
Innovation Challenge Grants and the Preparing Tomorrow's Teachers To Use
Technology (PT3) grants is to make a link between educational technology
innovations, promising practices for teaching and learning with technology,
and increases in student achievement. We believe that this model may be
replicable in other educational institutions, including schools, districts,
institutions of higher learning, and grant-funded initiatives. However, to use
this model, participating teachers must be able to clearly identify the
standards they are addressing in their instruction, articulate the specific
knowledge and skills that are to be fostered by using technology, carefully
observe student behavior in creating and refining their work, and create and
benchmark rubrics that they intend to use to evaluate student work.
The word "metacognition" also appears at http://faculty.trinity.edu/rjensen/265wp.htm
LEAD and SCALE for
Evaluation and Assessment of Asynchronous Learning
As summarized in my February 9, 2001 edition of New Bookmarks --- http://faculty.trinity.edu/rjensen/book01q1.htm#020901
The feature of the week is evaluation and assessment of asynchronous learning
network (ALN) courses and technology-aided course materials. The featured
sites are the following:
- SCALE at the University of Illinois --- http://w3.scale.uiuc.edu/scale/
Sloan Center for Asynchronous Learning Environments
The Sloan Center for Asynchronous Learning
Environments (SCALE) was established in March 1995 with a grant from the Alfred
P. Sloan Foundation. The original grant was for three years. It was
aimed at restructuring undergraduate courses to integrate various techniques
associated with asynchronous learning networks (ALN). More specific goals
were to create efficiencies in the educational process (cost, time, faculty
productivity), to increase student retention, and to decrease
time-to-degree. In July 1998 a renewal grant was obtained for an additional
two years. The goals for SCALE under the renewal grant are to continue the
work in promoting ALN, further investigate ways in which ALN can be used to
produce efficiencies in instruction, and experiment with ALN to make this
style of teaching attractive to mainstream faculty.
Also check out Dan Stone's summaries of SCALE (audio and
PowerPoint) at http://www.cs.trinity.edu/~rjensen/000cpe/00start.htm
Bob Jensen's threads are at http://faculty.trinity.edu/rjensen/255wp.htm
- LEAD at the University of Wisconsin --- http://www.cae.wisc.edu/~lead/
Learning (through) Evaluation, Adaptation, and Dissemination
(Note that the LEAD Center evaluates learning technologies in general
whether used synchronously or asynchronously.)
The LEAD Center provides support for funded research
projects requiring student assessment activities to evaluate education
reform efforts at both the undergraduate and graduate levels. The Center
helps to enact the UW-Madison Chancellor's "Vision
for the Future" by creating horizontal links for faculty,
departments and programs involved in educational reform projects, and thus
helping the campus to reconceptualize itself as a learning community. The
LEAD Center is the first of its kind in the nation, and is
"faculty-driven" and "student-focused" in its assessment
approach.
Also check the review article by the Director of the LEAD Center,
"How Do You Measure Success? Lessons on Assessment and Evaluation
from the LEAD Center," by Susan B. Millar, Syllabus, February
2001, pp. 11-12. I think the article will one day be posted at http://www.syllabus.com/
What can I do with learning technology that I can't do
now? What are the nuts and bolts of using learning technology?
How can I use learning technology so that my students really learn? --- http://www.wcer.wisc.edu/nise/cl1/
THE LT2 WEB SITE
LEAD researchers also have
enjoyed the opportunity to develop case studies for the College Level One
team's faculty development resource on effective use of learning technology.
This resource, Learning Through Technology (LT2), is located at www.wcer.wisc.edu/nise/cl1/.
LT2
is designed to answer questions such as, "What can I do with learning
technology that I can't do now? What are the nuts and bolts of using
learning technology? How can I use learning technology so that my
students really learn?" In particular, it seeks to serve SMET
educators who believe it is important to develop the ranks of future
scientists and a technical workforce, prepare teachers to be scientifically
knowledgeable, and help all students become scientifically literate members
of our society by making appropriate--indeed, transformative--use of the new
computer-based learning technologies. The LT2 site is not designed to
serve individuals seeking resources on distance learning or on how to
translate traditional course content into electronic formats. It
offers in-depth case studies, lively first-person accounts, "hallway
conversations" about technology, and links to articles and more
resources, including a taxonomy of learning technologies.
USING THE LEAD CENTER
The UW-Madison established
the LEAD Center in Fall 1994 to provide third-party evaluation research in
support of educational improvement efforts at both undergraduate and
graduate levels. The center has a client-driven and student-focused
approach to evaluation research. LEAD clients are faculty or staff at
UW-Madison or institutions that are collaborating with the UW.
Furthermore, they are individuals who:
- Can provide or work with LEAD to obtain the resources--usually grants--to pay the full cost of the evaluation research
- Have well-articulated goals for deeper and more relevant student learning
- Are developing and testing more effective strategies for achieving these goals
- Are committed to obtaining and using feedback on student learning experiences and outcomes to improve teaching and fine-tune goals
- Seek to understand the various factors that are necessary to more effectively institutionalize and disseminate their efforts.
For more information about
LEAD and the evaluation and assessment projects affiliated with the center,
visit www.cae.wisc.edu/~lead/.
The
ADEPT Program in the School of Engineering at Stanford University made the world
take notice that not all prestigious universities were going to take the high
road in favor of onsite education with a haughty air of arrogance that their
missions were not to deliver distance education courses. Other prestigious
universities such as Columbia University, Duke University, and the London
School of Economics certainly took notice following the immediate success of
Stanford's ADEPT Program for delivering a prestigious online Masters of
Engineering degree to off-campus students.
Stanford, through Stanford Online, is the first
university to incorporate video with audio, text, and graphics in its distance
learning offerings. Stanford Online also allows students to ask questions or
otherwise interact with the instructor, teaching assistant, and/or other
students asynchronously from their desktop computer. Stanford Online is
credited by many sources as a significant contributor to the growth of Silicon
Valley, and to the competitive technical advantage of companies that
participate in continuing education through distance learning.
Learn More about Stanford Online
Some distance education courses such as the ADEPT
Program at Stanford University are almost entirely asynchronous with neither
face-to-face onsite classes nor online virtual classes. Others, like Duke's
Global Executive MBA program, are mostly synchronous, with online virtual
classes and occasional onsite classes and field trips.
You can read the following about asynchronous learning in the ADEPT program
as reported at http://ww.stanford.edu/history/finalreport.html
Conclusions
In our project proposal, we stated that there were
several potential benefits to the use of asynchronous techniques in education.
These included increased course access for students, increased quality of the
educational experience, and lower costs.
Our experience to date mirrors that of others in that
it clearly demonstrates the value of increased access. This includes not only
students who had no access previously, but also students who used ADEPT to
review material previously accessed by other methods and to enable a certain
amount of schedule flexibility. At the same time, the evidence from our
project suggests that increased access may not be sufficient, by itself, to
justify the cost of providing asynchronous courses to those with other
options. This conclusion is, of course, restricted to our particular student
body which is composed of high-performing graduate students in technical
disciplines who are fortunate enough in most cases to have a variety of
options for accessing educational material.
Results from our project suggest that to raise the
quality of the educational experience, significant changes in pedagogy will be
necessary. Our belief is that the key to this
is to find ways to exploit the ability of the technologies to provide a more
flexible learning experience. The flexibility of time-on-task provided by
asynchronous techniques is obvious. However, other dimensions of flexibility
might include flexibility of media (text vs. graphics vs. audio/video for
example) as well as flexibility of course content. For many courses, there is
more than one acceptable set of content and more than one acceptable
sequencing of content as well. Asynchronously delivered material in multimedia
format has the potential of providing a customized, possibly even unique,
educational experience to each student based on his or her educational goals,
background, and experience. Currently, however, we would argue that no one
knows how to do this well.
The issue of cost is most problematic. As mentioned
above, there is an expectation that asynchronously delivered courses will be
less costly than synchronously delivered ones. To some extent this is a simple
pricing issue. However, if we frame the issue as the need for the production,
maintenance, and delivery costs of an asynchronous course to be less than that
of either a live or televised class, we can make some observations. Our
experience shows that the production and delivery costs of adequate quality
multimedia content are high. In a situation such as that at Stanford, where
classes are taught live and are also televised, asynchronous delivery is a
direct cost overlay. Although live classes will continue into the foreseeable
future, on-line synchronous delivery could supplant television should the
quality of the two methods become comparable.
To deliver high-quality educational material content
asynchronously, it is clear that reuse of material, tools to control content
production and maintenance costs, and economies of scale will be the key
determinants. These issues were beyond the scope of the present project.
Again, we would argue that currently no one really knows how to best manage
these determinants to hold down costs.
In closing, we note that there are now a great many
successful deployments of asynchronous education and training, including
entire asynchronous universities. The "technology deficit" which was
mentioned repeatedly by students and which we have explored at length as part
of this project, will work itself out over time. At this point, the most
urgent need for innovation in asynchronous learning lies in the area of
pedagogy and in the areas of large-scale content production, electronic
organization, and delivery.
At Stanford, it is our intention to continue to offer
asynchronous courses in the manner of this project. As was the case during the
project, the courses offered will probably range from two to four per quarter
(six to twelve per year). At the same time we hope to continue our
track-record as innovators by shifting our emphasis toward exploring methods
of increasing the quality of asynchronous education while at the same time
reducing its cost.
I
notice that David Noble does not devote much attention to successful (and highly
profitable) online programs such as Stanford's
ADEPT and Duke's
online Global MBA programs. That, plus Noble's bad spelling and sloppy
grammar, makes me wonder how well his "research" stands up
to rigorous standards of due care and freedom from bias. He does,
however, raise some points worth noting. Links to his defiance of distance
education at http://faculty.trinity.edu/rjensen/000aaa/theworry.htm#DavidNoble
There
are other legitimate concerns. See http://faculty.trinity.edu/rjensen/000aaa/theworry.htm
The
Clipper Project at Lehigh University is
aimed at learning assessment (named after the Pan Am Clipper that "did more
than herald a historic shift in the way goods and people were transported.
Indeed, it forced new ways of thinking about how we work and live. The expansion
of inexpensive air travel brought about a societal transformation.")
"Sink
or Swim? Higher Education Online: How Do We Know What Works --- And
What Doesn't?" by Gregory Farrington and Stephen Bronack, T.H.E. Journal,
May 2001, pp. 70-76 --- http://www.thejournal.com/magazine/vault/A3484.cfm
Last spring, the
chairman of the House of Representatives science subcommittee on basic
research expressed concern about the quality of online college courses. He
suggested that students who take courses online may not interact as much as
their peers in traditional courses, and that they may walk away with knowledge
but not with an understanding of how to think for themselves.
At a hearing designed
to gauge how the federal government should respond to this trend, the former
president of the University of Michigan, a distinguished MIT professor, and
other experts touted several online advantages. Among their assertions were
claims that student participation is higher in online courses, and that
students have easier access to professors through e-mail.
The committee
chairman remained skeptical and said he believed the National Science
Foundation should help assess the quality of online education by improving the
understanding of how the brain works and by figuring out how humans learn.
Well, learning how the brain works is no simple proposition. While we wait for
that day to come, there are a lot of insightful educational experiments that
can be done to sort out the reality from the sizzle of online education. At
Lehigh, we are spending a great deal of time these days doing just that. While
arguments can be made both for and against online classes, few are backed by
empirical research focused on actual teaching and learning behaviors. We agree
strongly with the chairman’s call for high quality educational research.
Millions of dollars
are spent each year on the development and delivery of online courses. Much of
this funding comes from federal agencies like the Department of Education and
the National Science Foundation, and a majority of the supported programs are
indeed creating interesting, engaging courses. But how do we know they really
work?
At best, one may find
anecdotal accounts of successful online classes. Professors claim, “I did it
in my class and it worked great!” or “the students noted on the
end-of-course survey that they enjoyed the course; therefore it is good.”
Occasionally, one may find reports that draw upon commonly shared theories,
such as “having control over more of one’s own learning should produce
better learners,” as proof of effectiveness. Such insights are valuable, but
they don’t provide the kind of understanding needed to make truly informed
decisions about the value of online education.
Jim DiPerna
(co-director of The Clipper Project) and Rob Volpe conducted a review of
research that produced nearly 250 potential articles concerning the evaluation
of Web-based instruction over the past 10 years. However, after eliminating
duplicate citations and irrelevant articles (i.e., articles merely describing
a Web-based course, articles offering guidelines for designing a Web-based
course, or articles explaining a particular Web-based technology), only a
dozen articles remained. Of the 12, 11 were based solely on students’
self-reported attitudes or perceptions regarding Web-based instruction.
Amazingly, only one directly assessed the impact of Web-based technology on
student learning (as measured by randomly selected essay performance and
letter grades) across subjects. DiPerna and Volpe presented a thorough review
of their research at APA last August.
As more learning
becomes digitized, we must analyze how socialization factors like
communication skills and interaction with other students are best fostered. We
must know which factors influence success. We must find out how technology
affects the way faculty members teach and the way students learn, as well as
how much it’s really going to cost to create and deliver this new form of
education. The only way we can truly know these things is through observing
the behaviors of students participating in digital learning.
At our university, we
have just begun a multi-year initiative to investigate the short- and
long-term effects of online classes. Aptly titled “The Clipper Project,”
the initiative will provide a baseline for future research into the impact of
Web-based courses on students and faculty.
For
the rest of the article, go to http://www.thejournal.com/magazine/vault/A3484.cfm
The
main page of The Clipper Project is at http://clipper.lehigh.edu/
The
Clipper Project is
a research and development initiative investigating the costs and benefits
of offering Web-based University courses to high school seniors who
participate in the project. High school students who are accepted in Lehigh
University's early admissions program will be eligible to enroll in a
Web-based version of one of Lehigh University’s introductory-level courses.
Currently, Economics
I and Calculus
I are available through the Clipper Project. To learn more about each
course, visit the Courses
section.
Interested in the Clipper Project?
Please visit the sections of the Clipper Project website that interest you. If
you have any questions, please view our Frequently Asked Questions (FAQ)
sections; links to these sections are to the right. If you don’t find what
you need, drop us an e-mail, and we’ll
be happy to answer any questions you may have!
Accreditation Article
in my February 9, 2001 Edition of New Bookmarks --- http://faculty.trinity.edu/rjensen/book01q1.htm#020901
"Regional Accrediting Commissions: The Watchdogs of Quality
Assurance in Distance Education," by Charles Cook, Syllabus, February
2001, beginning on p. 20 and p. 56. I think the article will one day be posted at http://www.syllabus.com/
"So, what's new?"
It's a question we are often asked as a kind of verbal handshake. As the
executive officer of a regional accrediting commission, these days I respond,
"What isn't new?"
My rejoinder is suggestive of
how technology-driven change has affected American higher education. We
now have e-learning, largely asynchronous instruction provided
anytime/anywhere, expanding its reach. Faculty roles have become
unbundled and instructional programs disaggregated. The campus portal is
no longer made of stone or wrought iron, and through it students have access
to virtual textbooks, laboratories, classrooms, and libraries, as well as an
array of services, available 24 hours a day, seven days a week; indeed we now
have wholly virtual universities. Technology has made our institutions
of higher learning, once like islands, increasingly dependent on external
entities if they are to be effective. Once pridefully autonomous, they
now seek affiliations with organizations both within and without the academy
to jointly offer programming online.
These new phenomena, unheard of
five years ago, challenge the capacity of regional accreditation commissions
to provide meaningful quality assurance of instructional programs offered by
colleges and universities. Simply put, many of the structures and
conditions that led to accreditation's established assumptions about quality
do not hold up in the virtual education environment. The core challenge,
of course, is to deal with new forms of delivery for instruction, resources,
and services. But beyond that, as with so many things, the Net has
provided unprecedented opportunities for colleges and students alike to
package parts of or all an educational experience in new ways previously
beyond contemplation. Given these circumstances, it's reasonable to ask,
"How is accreditation responding?"
Balancing Accountability and
Innovation
The eight regional commissions that
provide quality assurance for the majority of degree-granting institutions in
the United States are effectively taking action collectively and individually
to address the new forms of education delivery. Working within their
existing criteria and processes, they are seeking at once to maintain and
apply high standards while also recognizing that education can be provided
effectively in a variety of ways. However, regardless of the form of
education delivery in use in higher education, the commissions are resolved to
sustain certain values in accrediting colleges and universities:
- Education is best experienced within a community of learning where competent professionals are actively and cooperatively involved in creating, providing, and improving the instructional program
- Learning should be dynamic and interactive, regardless of the setting in which it occurs
- Instructional programs leading to degrees having integrity are organized around substantive and coherent curricula that define expected learning outcomes
- Institutions accept the obligation to address student needs related to, and to provide the resources necessary for, their academic success
- Institutions are responsible for the education provided in their name
- Institutions undertake the assessment and improvement of their quality, giving particular emphasis to student learning
- Institutions voluntarily subject themselves to external oversight
Revenue and Accreditation Hurdles
Facing Corporate Universities
One thing that just does not seem to
work is a university launched by a major publishing house. McGraw-Hill
World University was virtually stillborn as a
degree-granting institution. It evolved into McGraw-Hill Online Learning (
http://www.mhonlinelearning.com/
), which does offer some interactive training materials, but the original concept
of an online university (offering distance education courses for college credit) is dead and buried. Powerful companies like
Microsoft Corporation started up and then abandoned going it alone in
establishing new online universities.
The last venturesome publishing company to
start a university and fight to get it accredited has now given up on the idea
of having its own virtual university --- http://www.harcourthighered.com/index.html
Harcourt Higher
Education University was purchased by a huge publishing conglomerate called Thomson Learning (see http://www.thomsonlearning.com/harcourt/ ).
Thomson had high hopes, but soon faced the reality that it is probably
impossible to compete with established universities in training and education
markets.
The Thomson Corporation
has announced that it will not continue to operate Harcourt Higher Education: An
Online College as an independent degree-granting institution. Harcourt Higher
Education will close on August 27, 2001. The closing is the result of a change
of ownership, which occurred on July 13, 2001, when the Thomson Corporation
purchased the online college from Harcourt General, Inc.
From Syllabus e-News on August
7, 2001
Online College to Close Doors
Harcourt Higher Education, which
launched an online for-profit college in Massachusetts last year, is closing
the school's virtual doors Sept. 28. Remaining students will have their
credentials reviewed by the U.S. Open University, the American affiliate of
the Open University in England.
We can only speculate as to the complex
reasons why publishing companies start up degree-granting virtual universities
and subsequently abandon efforts to provide credit courses and degrees
online.
Enormous Revenue Shortfall (Forecast of 20,000 students in the first
year; Reality turned up 20 students)
"E-COLLEGES FLUNK OUT," By:
Elisabeth Goodridge, Information Week, August 6, 2001, Page 10
College
students appear to prefer classroom instruction over online offerings.
Print
and online media company Thomson Corp. said last week it plans to close its
recently acquired, for-profit online university, Harcourt Higher Education.
Harcourt opened with much fanfare a year ago, projecting 20,000 enrollees
within five years, but only 20 to 30 students have been attending.
Facing
problems from accreditation to funding, online universities have been
struggling mightily--in stark contrast to the success of the overall
E-learning market. A possible solution? E-learning expert Elliott
Masie predicts "more and more creative partnerships between traditional
universities and online ones."
Roosters Guarding the Hen House
Publishing houses failed to gain accreditation. I suspect that a major reason is that the AACSB and other
accrediting bodies have made it virtually impossible for corporations to obtain
accreditation for startup learning corporations that are not partnered with
established colleges and universities. In the U.S., a handful of
corporations have received regional accreditation (e.g., The University of
Phoenix and Jones International Corporation), but these were established and had
a history of granting degrees prior to seeking accreditation. In business
higher education, business corporations face a nearly impossible hurdle of
achieving business school accreditation ( see http://businessmajors.about.com/library/weekly/aa050499.htm
) since respected accrediting bodies are totally controlled by the present
educational institutions (usually established business school deans who behave
like roosters guarding the hen house).
Special accrediting bodies for online programs have sprung up, but these have
not achieved sufficient prestige vis-à-vis established accrediting
bodies.
Note the links to accreditation
issues at http://www.degree.net/guides/accreditation.html
(where GAAP means Generally Accepted Accrediting Practices):
All
About Accreditation: A brief overview of what you really
need to know about accreditation, including GAAP (Generally Accepted
Accrediting Practices). Yes, there really are fake accrediting agencies, and
yes some disreputable schools do lie. This simple set of rules tells how to
sort out truth from fiction. (The acronym is, of course, borrowed from the
field of accounting. GAAP standards are the highest to which accountants can
be held, and we feel that accreditation should be viewed as equally serious.)
GAAP-Approved
Accrediting Agencies: A listing of all recognized
accrediting agencies, national, regional, and professional, with links that
will allow you to check out schools.
Agencies
Not Recognized Under GAAP: A list of agencies that have
been claimed as accreditors by a number of schools, some totally phony, some
well-intentioned but not recognized.
FAQs:
Some simple questions and answers about accreditation and, especially,
unaccredited schools.
For more details on accreditation and assessment, see http://faculty.trinity.edu/rjensen/assess.htm
Question:
Is lack of accreditation the main reason why corporate universities such as
McGraw-Hill World University, Harcourt Higher Education University, Microsoft
University, and other corporations have failed in their attempts to compete with
established universities?
Bob Jensen's Answer:
Although minimal accreditation (necessary for the transfer of credits to
other colleges) is a very important cause of failure in the first few years of attempting to attract online
students, it is not the main cause of failure. Many (most) of the courses
available online were training courses for which college credit transfer is not
an issue.
- Why did the University of Wisconsin (U of W) swell with over 100,000
registered online students while Harcourt Higher Education University (HHWU)
struggled to get 20 registered?
Let me begin to answer my own question with two questions. If you want
to take an online training or education course from your house in
Wisconsin's town of Appleton, would you prefer to pay much more for the
course from HHWU than a low-priced tuition for Wisconsin residents at the U of
W? If you were a resident of Algona, Iowa, and the price were the same
for the course whether you registered at HHWU or U of W, would you choose U
of W? My guess is that in both cases students would choose U of W,
because the University of Wisconsin has a long-term tradition of quality
and is likely to be more easily recognized for quality on the students'
transcripts.
- Why can the University of Wisconsin offer a much larger curriculum than
corporate universities?
The University of Wisconsin had a huge infrastructure for distance education
long before the age of the Internet. Televised distance education
across the state has been in place for over 30 years. Extension
courses have been given around the entire State of Wisconsin for many
decades. The University of Wisconsin's information technology system
is already in place at a cost of millions upon millions of dollars.
There are tremendous economies of scale for the University of Wisconsin to
offer a huge online curriculum for training and education vis-à-vis a
corporate university starting virtually from scratch.
- What target market feels more closely attached to the University of
Wisconsin than some startup corporate university?
The answer is obvious. It's the enormous market comprised of alumni and
families of alumni from every college and university in the University of
Wisconsin system of state-supported schools.
- What if a famous business firm such as Microsoft Corporation or
Accenture (formerly Andersen Consulting) elected to offer a prestigious
combination of executive training and education to only upper-level
management in major international corporations? What are the problems
in targeting to business executives?
This target market is already carved out by alumni of elite schools such as
Stanford, Harvard, Chicago, Carnegie-Mellon, Columbia, London School of
Economics, Duke, University of Michigan, University of Texas, and the other
universities repeatedly ranked among the top 50 business schools in the
nation. Business executives are more often than not snobs when it
comes to universities in the peer set of "their" alma
maters. Logos of top universities are worth billions in the rising
executive onsite and online training and education market. UNext
Corporation recognized this, and this is the reason why its first major
step in developing an online executive education program was to partner with
five of the leading business schools in the world.
- Why does one corporate university, The
University of Phoenix, prosper when others fail or limp along with costs
exceeding revenues?
The University of Phoenix is the world's largest private university. Its success is largely due to a
tradition of quality since 1976. This does not mean that quality has
always been high for every course over decades of operation, but each year
this school seems to grow and offer better and better courses. Since
most of its revenues still come from onsite courses, it is not clear that
the school would prosper if it became solely an online university. The
school is probably further along on the learning curve than most other
schools in terms of adult learners. It employs a large number of very
dedicated and experienced full-time and part-time faculty. It
understands the importance of small classes and close communications between
students and other students and instructors. It seems to fill a niche
that traditional colleges and universities have overlooked.
You can read more about these happenings at http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
Especially note the prestigious universities going online at http://faculty.trinity.edu/rjensen/crossborder.htm
EVALUATION OF LEARNING TECHNOLOGY --- http://ifets.ieee.org/periodical/vol_4_2000/v_4_2000.html
"An introduction to the Evaluation
of Learning Technology"
http://ifets.ieee.org/periodical/vol_4_2000/intro.html
Martin Oliver
Higher Education Research and Development Unit University College London, 1-19
Torrington Place London, WC1E 6BT, England Tel: +44 20 7679 1905 martin.oliver@ucl.ac.uk
Evaluation can
be characterised as the process by which people make judgements about value
and worth; however, in the context of learning technology, this judgement
process is complex and often controversial. This article provides a context
for analysing these complexities by summarising important debates from the
wider evaluation community. These are then related to the context of learning
technology, resulting in the identification of a range of specific issues.
These include the paradigm debate, the move from expert-based to
practitioner-based evaluation, attempts to provide tools to support
practitioner-led evaluation, authenticity, the problem of defining and
measuring costs, the role of checklists, the influence of the quality agenda
on evaluation and the way in which the process of evaluation is itself
affected by the use of learning technology. Finally, these issues are drawn
together in order to produce an agenda for further research in this area.
"Mapping the Territory: issues in
evaluating large-scale learning technology initiatives"
http://ifets.ieee.org/periodical/vol_4_2000/anderson.html
Charles Anderson, Kate Day, Jeff Haywood, Ray Land and Hamish Macleod
Department of Higher and Further Education University of Edinburgh,
Paterson's Land Holyrood Road, Edinburgh EH8 8AQ
This article details
the challenges that the authors faced in designing and carrying out two recent
large-scale evaluations of programmes designed to foster the use of ICT in UK
higher education. Key concerns that have been identified within the evaluation
literature are considered and an account is given of how these concerns were
addressed within the two studies. A detailed examination is provided of the
general evaluative strategies of employing a multi-disciplinary team and a
multi-method research design and of how the research team went about: tapping
into a range of sources of information, gaining different perspectives on
innovation, tailoring enquiry to match vantage points, securing representative
ranges of opinion, coping with changes over time, setting developments in
context and dealing with audience requirements. Strengths and limitations of
the general approach and the particular tactics that were used to meet the
specific challenges posed within these two evaluation projects are identified.
"Peering Through a Glass Darkly:
Integrative evaluation of an on-line course"
http://ifets.ieee.org/periodical/vol_4_2000/taylor.html
Josie Taylor (There are also other authors listed for this article)
Senior Lecturer, Institute of Educational Technology
The Open University, Walton Hall
Milton Keynes MK7 6AA United Kingdom
j.taylor@open.ac.uk
Tel: +44 1908 655965
In this study we
describe a wide-spectrum approach to the integrative evaluation of an
innovative introductory course in computing. Since both the syllabus, designed
in consultation with industry, and the method of presentation of study
materials are new, the course requires close scrutiny. It is presented in the
distance mode to a class of around 5,000 students and uses a full range of
media: paper, broadcast television, interactive CD-ROM, a Web-oriented
programming environment, a Web site and computer conferencing. The evaluation
began with developmental testing whilst the course was in production, and then
used web-based and paper-based questionnaires once the course was running.
Other sources of data, in the form of observation of computing conferences and
an instrumented version of the Smalltalk programming environment, also provide
insight into students’ views and behaviour. This paper discusses the ways in
which the evaluation study was conducted and lessons we learnt in the process
of integrating all the information at our disposal to satisfy a number of
stakeholders.
"An evaluation model for
supporting higher education lecturers in the integration of new learning
technologies"
http://ifets.ieee.org/periodical/vol_4_2000/joyes.html
Gordon Joyes
Teaching Enhancement Advisor and Lecturer in Education School of Education
University of Nottingham Jubilee Campus, Wollaton Road Nottingham, NG8 1BB
United Kingdom Gordon.Joyes@nottingham.ac.uk
Tel: +44 115 9664172 Fax: +44 115 9791506
This paper provides a
description and some reflections on the ongoing development and use of an
evaluation model. This model was designed to support the integration of new
learning technologies into courses in higher education. The work was part of
the Higher Education Funding Council for England (HEFCE) funded Teaching and
Learning Technology Programme (TLTP). The context and the rationale for the
development of the evaluation model is described with reference to a case
study of the evaluation of the use of new learning technologies in the civil
and structural engineering department in one UK university. Evidence of the
success of the approach to evaluation is presented and the learning media grid
that arose from the evaluation is discussed. A description of the future use
of this tool within a participatory approach to developing learning and
teaching materials that seeks to embed new learning technologies is presented.
"A multi-institutional evaluation
of Intelligent Tutoring Tools in Numeric Disciplines"
http://ifets.ieee.org/periodical/vol_4_2000/kinshuk.html
Kinshuk (there are other authors listed for this article)
Information Systems Department
Massey University, Private Bag 11-222 Palmerston North, New Zealand Tel: +64 6
350 5799 Ext 2090 Fax: +64 6 350 5725 kinshuk@massey.ac.nz
This paper
presents a case study of evaluating intelligent tutoring modules for
procedural knowledge acquisition in numeric disciplines. As Iqbal et al.
(1999) have noted, the benefit of carrying out evaluation of Intelligent
Tutoring Systems (ITS) is to focus the attention away from short-term delivery
and open up a dialogue about issues of appropriateness, usability and quality
in system design. The paper also mentions an independent evaluation and how
its findings emphasise the need to capture longer-term retention.
"Avoiding holes in holistic
evaluation"
http://ifets.ieee.org/periodical/vol_4_2000/shaw.html
Malcolm Shaw
Academic Development Manager The Academic Registry, Room F101 Leeds Metropolitan
University Calverley Street, Leeds, LS1 3HE, UK m.shaw@lmu.ac.uk Tel: +44 113
283 3444 Fax: +44 113 283 3128
Suzanne Corazzi
Course Leader, Cert. in English with Prof. Studies Centre for Language
Studies,
Jean Monnet Building Room G01 Leeds Metropolitan University Beckett Park, Leeds,
LS6 3QS, UK s.corazzi@lmu.ac.uk
Tel: +44 113 283 7440 Fax: +44 113 274 5966
The paper describes
the evaluation strategy adopted for a major Teaching and Learning Technology
Programme (TLTP3) funded project involving Leeds Metropolitan University (LMU),
Sheffield Hallam University (SHU) and Plymouth University. The project
concerned the technology transfer of a web-based learning resource that
supports the acquisition of Key Skills from one of the Universities (LMU) to
the others, and its customisation for these new learning environments.
The principles that
guided the development of the evaluation strategy are outlined and the details
of the methods employed are given. The practical ways in which this large
project approached the organisation and management of the complexities of the
evaluation are discussed. Where appropriate, examples of the sort of
procedures and tools used are also provided.
Our overarching
aim in regard to evaluation was to take a thorough and coherent approach that
was holistic and that fully explored all the main aspects in the project
outcomes. The paper identifies the major issues and problems that we
encountered and the conclusions that we have reached about the value of our
approach in a way that suggests its potential usefulness to others operating
in similar circumstances.
"Classroom Conundrums: The Use of
a Participant Design Methodology"
http://ifets.ieee.org/periodical/vol_4_2000/cooper.html
Bridget Cooper and Paul Brna
Computer Based Learning Unit, Leeds University Leeds LS2 9JT, England, UK Tel:
+44 113 233 4637 Fax: +44 113 233 4635 bridget@cbl.leeds.ac.uk paul@cbl.leeds.ac.uk
We discuss the use of
a participant design methodology in evaluating classroom activities in the
context of an ongoing European funded project NIMIS, (Networked Interactive
Media in Schools). We describe the thinking behind the project and choice of
methodology, including a description of the pedagogical claims method utilised,
the way in which it was carried out and some of the interim results and the
issues raised in the process.
Though the project is
situated in three European schools, we concentrate here on the evaluation in
one UK school in particular: Glusburn County Primary school, near Leeds. The
classroom has been very well received by teachers and pupils and the
preliminary evaluation suggests some beneficial effects for both teachers and
pupils, as well as long term consequences from the participant design
methodology for some of the participants.
"Evaluating information and
communication technologies for learning"
http://ifets.ieee.org/periodical/vol_4_2000/scanlon.html
Eileen Scanlon, Ann Jones, Jane Barnard, Julie Thompson and Judith
Calder
Institute for Educational Technology The Open University, Milton Keynes MK7 6AA
United Kingdom e.scanlon@open.ac.uk
Tel: +44 1908 274066
In this paper we will
describe an approach to evaluating learning technology which we have developed
over the last twenty-five years, outline its theoretical background and
compare it with other evaluation frameworks. This has given us a set of
working principles from evaluations we have conducted at the Open University
and from the literature, which we apply to the conduct of evaluations. These
working practices are summarised in the context interactions and outcomes
(CIAO!) model. We describe here how we applied these principles, working
practices and models to an evaluation project conducted in Further Education.
We conclude by discussing the implications of these experiences for the future
conduct of evaluations.
"A Large-scale ‘local’
evaluation of students’ learning experiences using virtual learning
environments"
http://ifets.ieee.org/periodical/vol_4_2000/richardson.html
Julie Ann Richardson
3rd Floor, Weston Education Centre Guys, King’s & St. Thomas’ Hospital
Cutcombe Rd., London, SE5 9RJ United Kingdom julie.richardson@kcl.ac.uk
Tel: +44 207 848 5718 Fax: +44 207 848 5686
Anthony Turner
Canterbury Christ Church University
College North Holmes Rd., Canterbury, CT1 1QU United Kingdom a.e.turner@cant.ac.uk
Tel: +44 1227 782880
In 1997-8
Staffordshire University introduced two Virtual Learning Environments (VLEs),
Lotus Learning Space, and COSE (Creation of Study Environments), as part of
its commitment to distributed learning. A wide-reaching evaluation model has
been designed, aimed at appraising the quality of students’ learning
experiences using these VLEs. The evaluation can be considered to be a hybrid
system with formative, summative and illuminative elements. The backbone of
the model is a number of measuring instruments that were fitted around the
educational process beginning in Jan 1999.
This paper provides
an overview of the model and its implementation. First, the model and
evaluation instruments are described. Second, the method and key findings are
discussed. These highlighted that students need to feel more supported in
their learning, that they need more cognitive challenges to encourage
higher-order thinking and that they prefer to download their materials to hard
copy. In addition, tutors need to have a greater awareness of the ways
individual differences influence the learning experience and of strategies to
facilitate electronic discussions. Generally, there should be a balance
between learning on-line and face-to-face learning depending on the experience
of tutors, students, and the subject.
Finally the model is
evaluated in light of the processes and findings from the study.
"Towards a New Cost-Aware
Evaluation Framework"
http://ifets.ieee.org/periodical/vol_4_2000/ash.html
Charlotte Ash
School of Computing and Management Sciences Sheffield Hallam University Stoddart
Building, Howard Street Sheffield, S1 1WB, United Kingdom Tel: +44 114 225 4969
Fax: +44 114 225 5178 c.e.ash@shu.ac.uk
This paper proposes a new approach to
evaluating the cost-effectiveness of learning technologies within UK higher
education. It identifies why we, as a sector, are so unwilling to base our
decisions on results of other studies and how these problems can be overcome
using a rigorous, quality-assured framework which encompasses a number of
evaluation strategies. This paper also proposes a system of cost-aware
university operation, including integrated evaluation, attainable through the
introduction of Activity-Based Costing. It
concludes that an appropriate measure of cost-effectiveness is essential as
the sector increasingly adopts learning technologies.
"W3LS: Evaluation framework for
World Wide Web learning"
http://ifets.ieee.org/periodical/vol_4_2000/veen.html
Jan van der Veen (There are other authors of this article)
DINKEL Educational Centre University of Twente p.o.box 217, 7500AE Enschede The
Netherlands Tel: +31 53 4893273 Fax: +31 53 4893183 j.t.vanderveen@dinkel.utwente.nl
An evaluation
framework for World Wide Web learning environments has been developed. The
W3LS (WWW Learning Support) evaluation framework presented in this article is
meant to support the evaluation of the actual use of Web learning
environments. It indicates how the evaluation can be set up using
questionnaires and interviews among other methods. The major evaluation
aspects and relevant 'stakeholders' are identified. First results of cases
using the W3LS evaluation framework are reported from different Higher
Education institutes in the Netherlands. The usability of the framework is
evaluated, and future developments in the evaluation of Web learning in Higher
Education in the Netherlands are discussed.
Once again, the main website is at http://ifets.ieee.org/periodical/vol_4_2000/v_4_2000.html
E-Learner Competencies, by P. Daniel
Birch, Learning Circuits --- http://www.learningcircuits.org/2002/jul2002/birch.html
Training managers
and online courseware designers agree that e-learning isn't appropriate for
every topic. But e-learning also may not be the right fit for all types of
learners. Here are some of the behaviors of a successful e-learner. Do you
have them?
Much has been said
about the impact e-learning has on content developers, trainers, and training
managers. When the conversation turns to learners, attention focuses on the
benefits of less travel and fewer hours spent away from jobs. However, those
issues don't create an entire picture of how e-learning affects participants.
The industry needs to
take a closer look at how learning behaviors might adapt in an online
environment. In other words, how do the skills that serve learners well in a
classroom or during on-the-job learning translate to self-paced and virtual
collaboration learning experiences? Do learners need new competencies? Will an
organization find that some of its employees have e-learning disabilities?
In general, three
major factors influence an e-learner's success:
- management of the
learning environment (self-directive competencies)
- interaction with
the learning content (metacognitive competencies)
- interaction with
virtual learning facilitators and classmates (collaboration competencies)
Continued at
http://www.learningcircuits.org/2002/jul2002/birch.html
The
Criterion Problem
Question
Why is it that some of the smartest accounting graduates, including top scoring
CPA Gold Medal winners, do not make the partnership cut vis-a-vis some lower IQ
accounting staff in large CPA firms?
"Earnings
effects of personality, education and IQ for the gifted," by Steve Hsu,
MIT's Technology Review, April 2, 2011 ---
http://www.technologyreview.com/blog/post.aspx?bid=354&bpid=26600&nlid=4315
Thanks to a reader for pointing me to this recent paper by Heckman and
collaborators, which makes use of data from the
Terman study of
gifted individuals (minimum IQ of 135 on the Stanford-Binet).
Of the personality factors, Conscientiousness and Extraversion had the largest
(positive) effect on lifetime earnings -- see figures below. See
here for more
on Big 5 personality factors and a link to a personality test.
The Effects of Education, Personality, and IQ on
Earnings of High-Ability Men
This paper estimates the internal rate of return (IRR)
to education for men and women of the Terman sample, a 70-year long prospective
cohort study of high-ability individuals. The Terman data is unique in that it
not only provides full working-life earnings histories of the participants, but
it also includes detailed profiles of each subject, including IQ and measures of
latent personality traits. Having information on latent personality traits is
significant as it allows us to measure the importance of personality on
educational attainment and lifetime earnings.
Our analysis addresses two problems of the literature on returns to education:
First, we establish causality of the treatment effect of education on earnings
by implementing generalized matching on a full set of observable individual
characteristics and unobserved personality traits. Second, since we observe
lifetime earnings data, our estimates of the IRR are direct and do not depend on
the assumptions that are usually made in order to justify the interpretation of
regression coefficients as rates of return.
For the males, the returns to education beyond high school are sizeable. For
example, the IRR for obtaining a bachelor's degree over a high school diploma is
11.1%, and for a doctoral degree over a bachelor's degree it is 6.7%. These
results are unique because they highlight the returns to high-ability and
high-education individuals, who are not well-represented in regular data sets.
Our results highlight the importance of personality and intelligence on our
outcome variables. We find that personality traits similar to the Big Five
personality traits are significant factors that help determine educational
attainment and lifetime earnings. Even holding the level of education constant,
measures of personality traits have significant effects on earnings. Similarly,
IQ is rewarded in the labor market, independently of education. Most of the
effect of personality and IQ on life-time earnings arise late in life, during
the prime working years. Therefore, estimates from samples with shorter
durations underestimate the treatment effects.
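The internal rate of return (IRR) figures quoted above can be made concrete: the IRR to a degree is the discount rate at which the present value of the extra lifetime earnings exactly offsets the cost of the extra schooling (tuition plus forgone earnings). A minimal numerical sketch follows; the dollar amounts and the simple bisection solver are hypothetical illustrations, not figures or methods from the Heckman/Terman paper.

```python
# Hypothetical sketch of an internal rate of return (IRR) to education:
# the IRR is the discount rate r at which the net present value (NPV) of
# the incremental cash flows from extra schooling equals zero.
# All dollar amounts below are invented for illustration; they are NOT
# taken from the Terman data.

def npv(rate, cash_flows):
    """Net present value of cash_flows[t] received at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=0.0, hi=1.0, tol=1e-9):
    """Find the rate in (lo, hi) where NPV crosses zero, by bisection.
    Assumes NPV is positive at lo, negative at hi, with one sign change
    (true for an investment profile: outflows first, inflows later)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid  # NPV still positive, so the IRR lies higher
        else:
            hi = mid
    return (lo + hi) / 2

# Four years of tuition plus forgone earnings (-$30,000/yr), followed by
# 40 working years of a hypothetical $15,000 annual earnings premium.
flows = [-30_000] * 4 + [15_000] * 40
print(f"IRR of the hypothetical degree: {irr(flows):.1%}")
```

With these invented cash flows the solver returns an IRR on the order of ten percent, which is the sense in which the paper's 11.1% figure for a bachelor's degree should be read: a rate of return on the schooling investment, directly computed from lifetime earnings rather than inferred from regression coefficients.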
Here are a couple of interesting excerpts from the
paper:
... Our third contribution is to show how the effect of personality on earnings
varies through-out the men’s working lives. We find that without access to long
follow-up data, the estimated effect would be understated. Note that even though
the Terman sample has a restricted range of IQ, there is substantial variation
in personality. In fact, the Terman men do not differ from the general
population in terms of personality.
... note that even when controlling for rich background variables, IQ maintains
a statistically significant effect on lifetime earnings. Even though the effect
is slightly diminished from the uncontrolled association of the first column,
it is still sizable. Malcolm Gladwell claims rather generally in his book
Outliers that for the Terman men, IQ did not matter once family background and
other observable personal characteristics were taken into account. While we do
not want to argue that IQ has a larger role for the difference between 50 and
100, for example, than for the difference between 150 and 200, we do want to
point out that even at the high end of the ability distribution, IQ has
meaningful consequences. [The syntax of this last sentence is strange.
Presumably the impact of IQ variation from 50 to 100 (from severely handicapped
to average) is larger than for 150 to 200, even though their results show a
significant effect even in the very high range.]
Below are some nice figures (click for larger
versions). Note the personality factor distribution among Termites was similar
to that of the overall population, whereas the IQ range was restricted due to
selection. Typical lifetime earnings for this group of exceptionally able men
ranged from $2 to $3 million in 2008 dollars.
Continued
in article
April 5, 2011 reply from
Jim Fuehrmeyer
In my start group in Chicago over thirty years ago,
there were six of us who were Sells award winners including the gold
medalist. The gold medalist could take a test with the best of them but she
couldn’t carry on an intelligent conversation and she lacked the ability to
make judgments. Like I tell my students, accounting is not about solving
problems; it’s about identifying the problem to be solved. I’m sure some of
the smartest PhDs you’ve known in your career ended up being the poorest
teachers; being smart doesn’t mean you can communicate your knowledge to
others effectively.
Jim
Jensen Comment
Research that equates income levels to predictor factors inevitably suffers
from the "criterion problem" discussed below.
A new book says you need passion and perseverance
to achieve your goals in work and life. Is this a bold new idea or an old one
dressed up to be the latest self-help sensation?
"Is “Grit” Really the Key to Success?" by
Daniel Engber, Slate, May 2016 ---
http://www.slate.com/articles/health_and_science/cover_story/2016/05/angela_duckworth_says_grit_is_the_key_to_success_in_work_and_life_is_this.html#rt
Scientists have tried
to solve this puzzle for more than 50 years, writes Duckworth in her new
book
Grit: The Power of Passion
and Perseverance.
But even the school’s best means of screening its applicants—something
called the “whole candidate score,” a weighted mixture of a student’s SATs,
high school ranking, leadership ability, and physical fitness—does not
anticipate who will succeed and who will fail at Beast. So Duckworth
designed her own way of scoring candidates, giving each a survey that tested
his or her willingness to persevere in pursuit of long-term goals. She
called this measure “grit.” And guess what? Grit worked. The cadets’ survey
answers helped predict whether they would make it through the grueling
program. Duckworth’s best-seller peddles a pair of big ideas: that
grit—comprising a person’s perseverance and passion—is among the most
important predictors of success and that we all have the power to increase
our inner grit. These two theses, she argues, apply not just to cadets but
to kids in troubled elementary schools and undergrads at top-ranked
universities and to scientists, artists, and entrepreneurs. Duckworth’s book
describes a wide array of “paragons of grit,” people she’s either
interviewed or studied from afar: puzzlemasters and magicians, actors and
inventors, children and adults, Steve Young and Julia Child. Grit appears in
all of them, sprinkled over their achievements like a magic Ajax powder. In
tandem with some feisty scrubbing, it dissolves whatever obstacles might
hold a person back.
While her book has only
just arrived, Duckworth’s gritty tales—and the endlessly extensible ideas
they represent—have already spread throughout the country, into
classrooms,
boardrooms,
and
locker rooms
alike. Popularized in a
viral TED talk from
2013 and validated by that year’s
MacArthur “genius” grant, they’ve been inscribed into
national education policy, and public school districts in California are
grading kids—as well as
schools themselves—on
grit. Duckworth’s message has been broadcast with such speed and
thoroughness that other people even started selling
books on grit
before she published her own.
With
Grit, Duckworth has now put out the
definitive handbook for her theory of success. It parades from one essential
topic to another on a float of common sense, tossing out scientific insights
as it goes along. How to raise your kids, how to unearth your inner passion,
how to find a higher purpose—like other self-help authors, Duckworth finds
authoritative answers to these questions, promising to change how we see the
world. And like other self-help authors, she pulls a sleight of hand by
which even widely held assumptions end up looking like discoveries. It’s as
important to work hard, the book contends, as it is to be a natural talent.
Who would disagree with that?
Continued in article
Jensen Comment
Years ago one of my psychology professors at Stanford did a long-term study
(funded by the U.S. Navy) of predictors of success and concluded that the
fundamental problem of such research was in defining and measuring "success."
He termed this "The Criterion Problem."
A message from Professor XXXXX
I recently
submitted an article on Assessment Outcomes for distance education (DE) to
"The Technology Source". The editor suggested that I include a
reference to profiling the successful DE student because he was sure some
research existed on the subject. Well I have been looking for it casually for
3 years in my reading and the 3-4 conferences per year that I attend, and
never have come across anything. Have spent the last week looking in InfoTrac
and reviewed close to 300 abstracts, without a single good lead. You are the
man. So hoping you can answer the question - is there any empirical research
on the question of profiling a successful DE student and in particular any
research where an institution actually has a hurdle for students to get into
DE based on a pedagogically sound questionnaire? Hoping you know the answer
and have time to respond.
Reply from Bob Jensen
Hi XXXXX,
I am reminded of a psychology
professor, Tom Harrell, whom I had years ago at Stanford University. He
had a long-term contract from the U.S. Navy to study Stanford students when they
entered the MBA program and then follow them through their careers. The
overall purpose was to define predictors of success that could be used for
admission to the Stanford GSB (and extended to tests for admission into careers,
etc.). Dr. Harrell's research became hung up on "The Criterion
Problem" (i.e., the problem of defining and measuring
"success"). You will have the same trouble whenever you try to
assess graduates of any education program, whether it is onsite or online.
What is success? What is the role of any predictor apart from a myriad of
confounded variables?
You might take a look at the following
reference:
Harrell, T.W. (1992). "Some history of the Army General Classification
Test," Journal of Applied Psychology, 77, 875-878.
Success is a relative term.
Grades are not always good criteria for assessment. Perhaps a C student is the
greatest success story of a distance education program. Success may lie in
motivating a weak student to keep trying for the rest of his or her life to
learn as much as possible. Success may lie in motivating a genius to channel
creativity. Success may lie in scores on a qualification examination such
as the CPA examination. However, use of "scores" is very
misleading, because the impact of a course or an entire college degree is
confounded by other predictors such as age, intellectual ability, motivation,
freedom to prepare for the examination, etc.
Success may lie in advancement in the
workforce, but promotion and opportunity are subject to widely varying and
often-changing barriers and opportunities. A program's best graduate may
end up on a dead-end track, and its worst graduate may be a maggot who fell
into a manure pile. For example, it used to be virtually impossible for a woman
to become a partner in a large public accounting firm. Now the way is
paved with all sorts of incentives for women to hang in there and attain
partnership. Success also entails being at the right place at the right time,
and this is often a matter of luck as well as ability. George Bush
probably would never have had an opportunity to become one of this nation's best
leaders if there had not been a terrorist attack that afforded him such an
opportunity. Certainly this should not be termed "lucky," but it
is a rare "opportunity" to be a great "success."
Eileen Myles sent poems to The New Yorker for 30 years. Finally, one
was accepted. Payment: $600 and two nights at a motel ---
http://www.poetryfoundation.org/harriet/2016/04/times-ive-got-paid/
When it comes to special criteria for
acceptance into distance education programs, there are some who feel that, in
the interest of fairness, there should be no special criteria beyond the
criteria for acceptance into traditional programs. For example, see the Charles
Sturt University document at http://www.csu.edu.au/acadman/d13m.htm
You might find some helpful information
in the following reference --- http://202.167.121.158/ebooks/distedir/bestkudo.htm
Phillips, V., & Yager, C. The
best distance learning graduate schools: Earning your degree without leaving
home.
This book profiles 195 accredited institutions that offer graduate degrees via
distance learning. Topics include: graduate study, the quality and benefits of
distance education, admission procedures and criteria,
available education delivery systems, as well as accreditation, financial aid,
and school policies.
A review is given at http://distancelearn.about.com/library/weekly/aa022299.htm
More directly related to your question
might be the self-assessment suggestions at Excelsior College:
- Self-Directed Search
- Campbell Interest Survey
- Your Career Profile
- The Career Key
- Career Interest Checklist
- Transferable Skills Surveys
Another self-assessment process is
provided by ISIM University at http://www.isimu.edu/foryou/begin/eprocess.htm
In self-assessment processes it is sometimes difficult to determine whether the
motivation is promotion of the program or genuine assessment that helps
students self-select whether or not to apply.
You might be able to contact California
State University at Fullerton to see if they will share some of their assessment
outcomes of online learning courses. A questionnaire that is used there is at http://de-online.fullerton.edu/de/assessment/assessment.asp
Some good assessment advice is given at
http://www.ala.org/acrl/paperhtm/d30.html
A rather neat PowerPoint show from
Brazil is provided at http://www.terena.nl/tnc2000/proceedings/1B/1b2.ppt
(Click on the slides to move forward.)
The following references are given
at
- Faculty
Course Evaluation Form
University of Bridgeport
- Web-Based
Course Evaluation Form
Nashville State Technology Institute
- Guide
to Evaluation for Distance Educators
University of Idaho Engineering Outreach Program
- Evaluation
in Distance Learning: Course Evaluation
World Bank Global Distance EducatioNet
A Code of Assessment Practice is given
at http://cwis.livjm.ac.uk/umf/vol5/ch1.htm
A comprehensive outcomes assessment
report (for the University of Colorado) is given at http://www.colorado.edu/pba/outcomes/
A Distance Learning Bibliography is
available at http://mason.gmu.edu/~montecin/disedbiblio.htm
Also see "Integration of
Information Resources into Distance Learning Programs" by Sharon M.
Edge and Denzil Edge at http://www.learninghouse.com/pubs_pubs02.htm
My bottom line conclusion is that I
probably did not help you with the specific help you requested. At best, I
provided you with some food for thought.
Lawyers Don't Like Being Ranked
It's a sunny day in Seattle when two lawyers can bring
a class action suit on their own behalf -- and then see it rejected on First
Amendment grounds. That's what happened last week in the Emerald City, when
Federal District Judge Robert S. Lasnik ruled that there was no basis for
cracking down on a lawyer-rating Web site merely because some of its ratees
didn't like how they were portrayed. The site, called Avvo, does for lawyers
what any number of magazines and Web sites have been doing for other professions
for years. Magazines regularly publish stories that rank an area's doctors and
dentists. There are rating sites and blogs for the "best" hairstylists,
manicurists, restaurants and movie theaters. Almost any consumer product or
service these days is sorted and ranked.
"Judging Lawyers," The Wall Street Journal, December 24, 2007; Page A10
---
http://online.wsj.com/article/SB119846335960848261.html
Avvo Lawyer Ratings ---
http://www.avvo.com/
Jensen Comment
In fairness, most of these ranking systems are misleading. For example,
physicians and lawyers who lose more often may also be willing to take on the
tougher cases having low probabilities of success. Especially note
"Challenging Measures of Success" at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#BusinessSchoolRankings
And some professionals who win a lot may do so in unethical ways. And lawyers,
like physicians, have different specialties, such that within a particular
specialty (perhaps a rarely needed one among more than 100) a practitioner may
be outstanding even though overall rankings obscure it.
Bob Jensen threads on college ranking controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#BusinessSchoolRankings
"The Case Against College Education," by Ramesh Ponnuru, Time
Magazine, February 24, 2010 ---
http://www.time.com/time/nation/article/0,8599,1967580,00.html?xid=huffpo-direct
Thank you Ms. Huffington for the heads up.
Even in these days of partisan rancor, there is a
bipartisan consensus on the high value of postsecondary education. That more
people should go to college is usually taken as a given. In his State of the
Union address last month, President Obama echoed the words of countless high
school guidance counselors around the country: "In this economy, a high
school diploma no longer guarantees a good job." Virginia Governor Bob
McDonnell, who gave the Republican response, concurred: "All Americans agree
that a young person needs a world-class education to compete in the global
economy."
The statistics seem to bear him out. People with
college degrees make a lot more than people without them, and that
difference has been growing. But does that mean that we should help more
kids go to college — or that we should make it easier for people who didn't
go to college to make a living? (See the 10 best college presidents.)
---
http://www.time.com/time/specials/packages/article/0,28804,1937938_1937934,00.html
We may be close to maxing out on the first
strategy. Our high college drop-out rate — 40% of kids who enroll in college
don't get a degree within six years — may be a sign that we're trying to
push too many people who aren't suited for college to enroll. It has been
estimated that, in 2007, most people in their 20s who had college degrees
were not in jobs that required them: another sign that we are pushing kids
into college who will not get much out of it but debt.
The benefits of putting more people in college are
also oversold. Part of the college wage premium is an illusion. People who
go to college are, on average, smarter than people who don't. In an economy
that increasingly rewards intelligence, you'd expect college grads to pull
ahead of the pack even if their diplomas signified nothing but their smarts.
College must make many students more productive workers. But at least some
of the apparent value of a college degree, and maybe a lot of it, reflects
the fact that employers can use it as a rough measure of job applicants'
intelligence and willingness to work hard.
We could probably increase the number of high
school seniors who are ready to go to college — and likely to make it to
graduation — if we made the K-12 system more academically rigorous. But
let's face it: college isn't for everyone, especially if it takes the form
of four years of going to classes on a campus.
(See pictures of the college dorm's evolution.) ---
http://www.time.com/time/photogallery/0,29307,1838306_1759869,00.html
To talk about college this way may sound élitist.
It may even sound philistine, since the purpose of a liberal-arts education
is to produce well-rounded citizens rather than productive workers. But
perhaps it is more foolishly élitist to think that going to school until age
22 is necessary to being well-rounded, or to tell millions of kids that
their future depends on performing a task that only a minority of them can
actually accomplish.
The good news is that there have never been more
alternatives to the traditional college. Some of these will no doubt be
discussed by a panel of education experts on Feb. 26 at the National Press
Club, a debate that will be aired on PBS. Online learning is more flexible
and affordable than the brick-and-mortar model of higher education.
Certification tests could be developed so that in many occupations employers
could get more useful knowledge about a job applicant than whether he has a
degree. Career and technical education could be expanded at a fraction of
the cost of college subsidies. Occupational licensure rules could be relaxed
to create opportunities for people without formal education.
It is absurd that people have to get college
degrees to be considered for good jobs in hotel management or accounting —
or journalism. It is inefficient, both because it wastes a lot of money and
because it locks people who would have done good work out of some jobs. The
tight connection between college degrees and economic success may be a
nearly unquestioned part of our social order. Future generations may look
back and shudder at the cruelty of it.
Read more:
http://www.time.com/time/nation/article/0,8599,1967580,00.html?xid=huffpo-direct#ixzz0gYarvwQM
Time's Special Report on Paying for a College Education ---
http://www.time.com/time/specials/packages/0,28757,1838709,00.html
Jensen Comment
I think it is misleading to talk about the "value" of education in terms of the
discounted present value of a degree due to career advantages. Firstly,
education has many intangible values that cannot be measured such as being
inspired to really enjoy some of the dead or living poets.
Secondly, even if college graduates on average make a lot more money,
this is an illustration of how to lie with statistics. A major problem is the
variance about the mean. Much depends upon where students graduate, what they
majored in for their first degree, whether or not they attended graduate school,
what they majored in in graduate school, where they got their graduate degree,
etc. Average incomes may also be pulled upward by heavy right tails: low income
levels are bounded below, whereas high income levels may explode toward the
moon for bankers, corporate executives, physician specialists, etc.
In any case, telling every student to expect more than a million dollars just
for getting a bachelor's degree is a big lie!
Onsite
Versus Online
Distance Education is Rapidly Gaining Acceptance
Free online courses (some for credit) ---
http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI
Other online course and degree alternatives ---
http://faculty.trinity.edu/rjensen/Crossborder.htm
New tools to prevent high tech cheating
http://online.qmags.com/TJL0813?sessionID=4CB36C8DBEEC3C846A1D7E17F&cid=2399838&eid=18342#pg1&mode1
See the article beginning on Page 213
Also see
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#Ideas
Trust in God, but Tie Your Camel: Use Zoom Testing Online
https://jborden.com/2020/06/10/trust-in-god-but-tie-your-camel/
This is how a Villanova Professor uses Zoom Testing Online to discourage
cheating
"In a Fake Online Class With Students Paid to Cheat, Could Professors Catch
the Culprits?" by Brad Wolverton, Chronicle of Higher Education, December 22,
2015 ---
http://chronicle.com/article/In-a-Fake-Online-Class-With/234687?cid=wc&utm_source=wc&utm_medium=en&elq=5e53f217c61144bcb8f7be3a76e61ae2&elqCampaignId=2123&elqaid=7325&elqat=1&elqTrackId=f7b3e292feda404c8db56c657c1c5e5f
Jensen Comment
The best prevention device is still a proctoring village vicar or an employee's
supervisor.
Bob Jensen's neglected threads on
prevention and detection of online cheating ---
http://faculty.trinity.edu/rjensen/assess.htm#OnsiteVersusOnline
Respondus Monitor - online exams proctor ---
http://youtu.be/lGyc_HBchOw
Another approach that I recommend is to outsource proctoring to local K-12
teachers or the village vicar.
Chronicle of Higher Education: Students Cheat. How Much Does It Matter?
Click Here
. . .
Trust your students, the pedagogical
progressives advise, and they’ll usually live up to it. But that has not
been Ajay Shenoy’s experience. In March, Shenoy, an assistant professor of
economics at the University of California at Santa Cruz, relaxed the
expectations for his winter-quarter final, making it open note and giving
students more time.
That hadn’t been Shenoy’s first impulse.
Initially, he thought he might make it harder to cheat by letting students
view just one question at a time, and randomizing the order of questions.
The test would be timed, and everyone would take it at once.
Then his students started to go home, and home
was all over the world. Between time zones and air travel, there was no way
he could expect them to all find the same two hours for an exam. Besides, he
realized, his students were, understandably, incredibly stressed.
Still, Shenoy required students to do their own
work. He even asked them to let him know if they heard about anyone
cheating.
After the exam, a couple of students came
forward. One had heard about classmates putting test questions on Chegg.
Another was pretty sure his housemates had cheated off their fraternity
brothers. Alarmed, Shenoy decided to investigate. In his research, Shenoy
uses natural-language processing to detect signs of political corruption. So
to understand the scope of the cheating, he wrote a simple computer program
to compare students’ exam responses. He uncovered an amount of cheating he
calls “stunning.”
It also bothered Shenoy that it seemed to be
common knowledge among his students that a number of their classmates were
cheating.
“This is the issue when people say you should
just trust students more,” Shenoy says. “Even if 99 percent of the students
don’t want to cheat, if that 1 percent is cheating — and if everyone else
knows about it — it’s a prisoner’s dilemma, right?” Students who are honest
know they are at a disadvantage, he says, if they don’t think the professor
is going to enforce the rules.
So Shenoy enforced the rules. He investigated
20 cases in his class of 312, and filed academic-misconduct reports for 18.
(Those weren’t the only students who cheated, Shenoy says. Through
documentation he got from Chegg, he knows many more students turned to the
site. But he had time to pursue only students who had submitted questions to
it.)
In-person exam cheating, Shenoy thought, is
ineffective, and probably doesn’t boost students’ grades all that much —
certainly no more than, well, studying more.
But
when he compared the grades of students who had cheated with those of their
classmates who didn’t, he found that the cheaters scored about 10 points
higher on the exam. “I guess it’s possible that the smarter students
were also the ones who chose to cheat,” Shenoy says. “But usually, in my
experience, it’s the other way around.”
Who’s hurt when students cheat? It’s their
loss, some professors will argue. It’s the cheaters who’ve squandered their
tuition payment, time, and opportunity to learn the material. Besides, their
actions will probably catch up to them eventually. That’s not how Shenoy
views it, though.
If cheating leads to a higher grade, says the
economist, then cheating is rational. “This was actually quite valuable to
the student,” Shenoy says. “At the expense of the other students.”
So Shenoy felt a responsibility. “Part of my reason
for putting so much time into pursuing this,” he says, “was just out of a
sense of justice for the other students.”
Continued in article
Jensen Comment
I continue to repeat my example of the 60+
students who were expelled for cheating in a political science class where every
student was assured of an A grade in the course if they did the homework. Many
reported they cheated (in this case plagiarized) because, when an A grade was
assured irrespective of effort, their time was better spent on courses where an
A grade was not assured.
When some students took my courses on a
pass-fail basis, their performance on homework, term papers, and exams was
seldom nearly as good as that of most students taking the course for a letter
grade. The pass-fail students seemingly did not put the same time and effort
into learning as the students who worked overtime for an A or B grade.
Chronicle of Higher
Education: Seven Ways to Assess Students Online and Minimize Cheating
---
Click Here
- Break up a big high-stakes exam into small weekly tests.
- Start and end each test with an honor statement.
- Ask students to explain their problem-solving process.
- Get to know each student's writing style in low- or no-stakes tasks.
- Assess learning in online discussion forums.
- Don't base grades solely on tests.
- Offer students choice in how they demonstrate their knowledge.
Jensen Comment
If you base grades almost entirely upon examinations, make students take those
examinations in some type of testing center or have the exams proctored locally.
How do you stay awake watching paint dry?
"Behind the Webcam's Watchful Eye, Online Proctoring Takes Hold," by
Steve Kolowich, Chronicle of Higher Education, April 15, 2013 ---
http://chronicle.com/article/Behind-the-Webcams-Watchful/138505/
Hailey Schnorr has spent years peering into the
bedrooms, kitchens, and dorm rooms of students via Webcam. In her job
proctoring online tests for universities, she has learned to focus mainly on
students' eyes.
"What we look for is eye movement," says Ms.
Schnorr. "When the eyes start veering off to the side, that's clearly a red
flag."
Ms. Schnorr works for ProctorU, a company hired by
universities to police the integrity of their online courses.
ProctorU is part of a cottage industry of online
proctoring providers that has grown in recent years as colleges and
universities have set their sights on "nontraditional" students who want to
earn degrees without leaving home.
The old biases against online education have begun
to erode, but companies that offer remote-proctoring services still face an
uphill battle in persuading skeptics, many of whom believe that the duty of
preserving academic integrity should not be entrusted to online watchers who
are often thousands of miles from the test-takers. So ProctorU and other
players have installed a battery of protocols aimed at making their systems
as airtight as possible.
The result is a monitoring regime that can seem a
bit Orwellian. Rather than one proctor sitting at the head of a physical
classroom and roaming the aisles every once in a while, remote proctors peer
into a student's home, seize control of her computer, and stare at her face
for the duration of a test, reading her body language for signs of
impropriety.
Even slight oddities of behavior often lead to
"incident reports," which the companies supply to colleges along with
recordings of the suspicious behavior.
Rebekah Lovaas, 24, served as a proctor at
Kryterion, another such company, for three years before being promoted to
operations analyst. When she first started, Ms. Lovaas said, the company's
methods struck her as "almost intrusive."
She was not alone. Teresa Fishman, director of the
International Center for Academic Integrity, a leading advocate for reliable
safeguards against cheating and a former police officer, said she favored
the approach of asking online students to come to a physical testing center
for exams. "To watch somebody in their room—that seems a little invasive to
me," she said.
Each online-proctoring company has developed its
own approach. Some monitor live feeds; others record students via Webcam and
watch the recordings. Some require students to share a view of their
computer monitor, and empower a proctor to override their cursor if
necessary; others simply make students install software that makes it
impossible to use Web browsers or chat programs while the exam is in
progress.
The companies make bold claims about their
effectiveness, arguing their services are not just equal to but better than
in-person proctoring. "The level of supervision over the Web is much more
intense," said William Dorman, chief executive at Kryterion. "Frankly," he
said, "we can spot any cheating."
Kryterion notes "aberrant behavior"—a test-taker
leaves his seat, or answers the phone, or some similar breach—in about 16
percent of the exams it monitors, said Mr. Dorman. This does not always mean
the students are cheating, but it does mean the university will be notified.
Software Secure, another company that works with
universities, classifies such "incidents" into three tiers. The company's
subcontractor in India, Sameva Global, said it notes "minor suspicions" in
50 percent of exams; "intermediate" suspicions in 20 to 30 percent; and
"major" incidents in 2 to 5 percent.
Creating Standards
The availability of these options raises a question
for all universities: How much proctoring is enough?
Higher-education institutions are expected to
certify academic achievement. But how they do that has been left largely
unregulated.
Federal officials, when drafting the Higher
Education Opportunity Act of 2008, specifically avoided detailing proctoring
requirements for online education, said Mollie McGill, a deputy director at
the Western Interstate Commission for Higher Education's Cooperative for
Educational Technologies. When it came to policing online exams, the
rule-making committee elected to avoid regulatory language that would favor
any specific technology or practice, said Ms. McGill.
The result was a minimum standard for compliance—a
secure login and password—that has left online programs largely to their own
devices, she said.
The emergence of massive open online courses has
brought new attention to ensuring integrity in a global online classroom.
The American Council on Education, a Washington-based group that advises
college presidents on policy, recently put ProctorU's protocols under the
microscope as part of its review of five MOOCs from Coursera.
"In general our standard was that we wanted to see
something that was at least as good or better than what you would see in a
large lecture class," said Cathy A. Sandeen, vice president for education
attainment and innovation at the council. That was the basic guidance the
council gave the professors it enlisted to judge whether students who
succeed in the Coursera MOOCs should be awarded transfer credit from
degree-granting universities. All five courses earned a seal of approval, in
what was a big moment for ProctorU and for online proctoring in general.
The Proctoring Life
At ProctorU's office in Livermore, Calif., Ms.
Schnorr and her colleagues report to work wearing color-coded polo shirts:
black for managers, blue for proctors, white for trainees. The proctors'
workspaces are identical, she said, each with a computer and two monitors,
and bear none of the family photos or other accouterments that adorn a
typical lived-in cubicle. Employees do not have regular workspaces, says Ms.
Schnorr; they just take whatever workspace is open. The shifts typically
last four hours, including a 10-minute break, although proctors sometimes
work double shifts.
Watching people take tests can be dull work. Three
proctors interviewed by The Chronicle said most incidents were routine,
often the result of a misunderstanding. But occasionally a student will try
to outwit the system—or simply throw proctors for a loop.
Continued in article
Update Technology for Proctoring Distance Examinations
Hi Bob,
Thanks for getting back to me; that link was very useful. Perhaps I can
provide you with another tool to prevent that issue from occurring. I would
like to introduce you to ProctorU. ProctorU is an online proctoring service
that allows test-takers to take their examinations from home while
maintaining the academic integrity of the institution. To address your
concern about student verification, we are able to authenticate the test
taker's identity using a data-driven process that asks questions about
previous address history, phone numbers, and other information pulled from
our data partner. If you have some free time tomorrow or next week, I would
be happy to discuss this further with you.
I look forward to hearing from you.
Patrick Ochoa
Partnership
Coordinator
ProctorU, Inc
Bob Jensen's threads on various ploys used to "proctor" distance education
examinations ---
http://www.trinity.edu/rjensen/Assess.htm#OnlineOffCampus
I must be psychic, because I've been saying this all along ---
http://faculty.trinity.edu/rjensen/255wp.htm
So has Amy Dunbar ---
http://www.cs.trinity.edu/~rjensen/002cpe/Dunbar2002.htm
"The Medium is Not the Message," by Jonathan Kaplan, Inside Higher Ed,
August 11, 2009 ---
http://www.insidehighered.com/views/2009/08/11/kaplan
A few weeks ago, the U.S. Department of Education
released a report that looked at 12 years' worth of education studies, and
found that online learning has clear advantages over face-to-face
instruction.
The study, "An Evaluation of Evidence-Based Practices in Online Learning: A
Meta-Analysis and Review of Online Learning Studies," stated that “students
who took all or part of their class online performed better, on average,
than those taking the same course through traditional face-to-face
instruction.”
Except for one article,
on this Web site,
you probably didn’t hear about it -- and neither did anyone else.
But imagine for a moment that the report came to the opposite conclusion.
I’m sure that if the U.S. Department of Education had published a report
showing that students in online learning environments performed worse,
there would have been a major outcry in higher education with calls to shut
down distance-learning programs and close virtual campuses.
I believe the reason that the recent study elicited so little commentary is
due to the fact that it flies in the face of the biases held by some across
the higher education landscape. Yet this study confirms what those of us
working in distance education have witnessed for years: Good teaching helps
students achieve, and good teaching comes in many forms.
We know that online learning requires devout attention on the part of both
the professor and the student -- and a collaboration between the two -- in a
different way from that of a face-to-face classroom. These critical aspects
of online education are worth particular mention:
- Greater student engagement: In an
online classroom, there is no back row and nowhere for students to hide.
Every student participates in class.
- Increased faculty attention: In most
online classes, the faculty’s role is focused on mentoring students and
fostering discussion. Interestingly, many faculty members choose to
teach online because they want more student interaction.
- Constant access: The Internet is open
24/7, so students can share ideas and “sit in class” whenever they have
time or when an idea strikes -- whether it be the dead of night or
during lunch. Online learning occurs on the student’s time, making it
more accessible, convenient, and attainable.
At Walden University, where
I am president, we have been holding ourselves accountable for years, as
have many other online universities, regarding assessment. All universities
must ensure that students are meeting program outcomes and learning what
they need for their jobs. To that end, universities should be better able to
demonstrate -- quantitatively and qualitatively -- the employability and
success of their students and graduates.
Recently, we examined the
successes of Walden graduates who are teachers in the Tacoma, Wash., public
school system, and found that students in Walden teachers’ classes tested
with higher literacy rates than did students taught by teachers who earned
their master’s from other universities. There could be many reasons for
this, but, especially in light of the U.S. Department of Education study, it
seems that online learning has contributed meaningfully to their becoming
better teachers.
In higher education, there
is still too much debate about how we are delivering content: Is it online
education, face-to-face teaching, or hybrid instruction? It’s time for us to
stop categorizing higher education by the medium of delivery and start
focusing on its impact and outcomes.
Recently, President Obama remarked, “I think there’s a possibility that
online education can provide, especially for people who are already in the
workforce and want to retrain, the chance to upgrade their skills without
having to quit their job.” As the U.S. Department of Education study
concluded, online education can do that and much more.
But Kaplan above ignores some of the dark side aspects of distance education
and education technology in general ---
http://faculty.trinity.edu/rjensen/000aaa/theworry.htm
The biggest hurdle, in my opinion, is that if distance education is done
correctly with intensive online communications, instructors soon become
burned out. In an effort to avoid burnout, much of the learning effectiveness is lost.
Hence the distance education paradox.
Kaplan also ignores some of the strong empirical support for online
learning, especially the enlightening SCALE experiments at the University of
Illinois ---
http://faculty.trinity.edu/rjensen/255wp.htm#Illinois
August 11, 2009 reply from Steve Markoff
[smarkoff@KIMSTARR.ORG]
Bob:
I've always believed that the
role of the teacher is one of FACILITATOR. My role in the classroom is
making it EASIER for information to move from one place to another - from
point A to point B. This could be from textbook to student, it could be
from the outside world to the student, from another student to the student,
from the student him or herself to that same student AND from teacher to
student (me to them). In defining the word 'teaching', I think many people
overemphasize the last transition that I mentioned, thinking that the
primary movement of information is from them (the teacher) to the students.
In fact, it constitutes a minority of total facilitated information flow in
a college classroom. I think this misunderstanding leads many to
underestimate the value of other sources in the education process other than
themselves. Online content is just one of many alternative sources.
Unfortunately, online formats do
allow certain professors to hide behind the electronic cloak and
politely excuse themselves from the equation, which greatly hurts the
student. Also, online formats can be fertile ground for professors who lack
not only the desire to 'teach' but also the ability, and who thus become
mere administrators rather than teachers.
steve
Hi John and
Pat and Others,
I would
not say out loud to Amy Dunbar or Denny Beresford that they're easy
graders ---
http://www.cs.trinity.edu/~rjensen/002cpe/02start.htm
I would
not say that out loud to the graduates of two principles of accounting
weed-out courses offered year after year at Brigham Young University, where
classes meet on relatively rare occasions for inspiration about accountancy
but not technical learning ---
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#BYUvideo
Try to
tell the graduates of Stanford University’s ADEPT Masters of Electrical
Engineering program that they had an easier time of it because the entire
program was online.
There’s
an interesting article about how researchers misconstrue causality:
"Like elaborately plumed birds … we preen and strut and display our t-values."
That was Edward Leamer’s uncharitable description of his profession in 1983.
"Cause
and Effect: Instrumental variables help to isolate causal relationships, but
they can be taken too far," The Economist, August 15-21, 2009, Page 68.
It is
often the case that distance education courses are taught by non-tenured
instructors, and non-tenured instructors may be easier with respect to
grading than tenured faculty because they are even more in need of strong
teaching evaluations so as not to lose their jobs. The problem may have
nothing whatsoever to do with online versus onsite education ---
ergo misconstrued causality.
I think
it’s very rewarding to look at grading in formal studies using the
same full-time faculty teaching sections of online versus onsite students.
By formal study, I mean using the same instructors, the same materials, and
essentially the same examinations. The major five-year, multimillion dollar
study that first caught my eye was the SCALE experiments on the campus of
the University of Illinois where 30 courses from various disciplines were
examined over a five year experiment.
Yes the
SCALE experiments showed that some students got higher grades online,
notably B students who became A students and C students who became A
students. The online pedagogy tended to have no effect on D and F students
---
http://faculty.trinity.edu/rjensen/255wp.htm#Illinois
Listen to
Dan Stone’s audio about the SCALE Experiments ---
http://www.cs.trinity.edu/~rjensen/000cpe/00start.htm
But keep
in mind that in the SCALE experiments, the same instructor of a course was
grading both the online and onsite sections of the same course. The reason
was not likely to be that online sections were easier. The SCALE experiments
collected a lot of data pointing to more intense communications with
instructors and more efficient use of students' time that is often wasted in
going to classes.
The
students in the experiment were full-time on-campus students, such that the
confounding problems of having adult part-time students were not a factor in
the SCALE experiments of online, asynchronous learning.
A
Statement About Why the SCALE Experiments Were Funded
ALN = Asynchronous Learning
We are particularly interested in new
outcomes that may be possible through ALN. Asynchronous computer networks
have the potential to
improve contact
with faculty, perhaps making
self-paced learning a realizable goal for some off- and on-campus students.
For example, a motivated student could progress more rapidly toward a
degree. Students who are motivated but find they cannot keep up the pace
may be able to slow down and take longer to complete a degree, and not just
drop out in frustration. So we are interested in what impact ALN will have
on outcomes such as time-to-degree and student retention. There are many
opportunities where ALN may contribute to another outcome: lowering the cost
of education, e.g., by naturally introducing new values for old measures
such as student-faculty ratios. A different kind of outcome, for learners who
are juggling work and family responsibilities, would be the chance to earn a
degree or certification at home. This latter outcome is a special focus for us.
Alfred P.
Sloan Foundation's Program in
Learning Outside the Classroom at
http://w3.scale.uiuc.edu/scale/
Another
study that I love to point to was funded by the Chronicle of Higher
Education. Read about when one of the Chronicle’s senior editors
took a Governmental Accounting Course at the University of Phoenix during
which the instructor of the course had no idea that Goldie
Blumenstyk
was assessing how difficult or how easy the course was for students in
general. I think Goldie’s audio report of her experience is still available
from the Chronicle of Higher Education. Goldie came away from the
course exhausted.
The Chronicle's Goldie Blumenstyk has covered
distance education for more than a decade, and during that time she's
written stories about
the economics of for-profit education, the ways that online institutions
market themselves, and the demise of
the 50-percent rule. About the only thing she hadn't done, it seemed,
was to take a course from an online university. But this spring she finally
took the plunge, and now she has completed a class in government and
nonprofit accounting through the University of Phoenix. She shares tales
from the cyber-classroom -- and her final grade --
in a podcast with Paul Fain, a Chronicle reporter.
Chronicle of Higher Education, June 11, 2008 (Audio) ---
http://chronicle.com/media/audio/v54/i40/cyber_classroom/
· All course materials (including textbooks) online; no additional
textbooks to purchase
· $1,600 fee for the course and materials
· Woman instructor with respectable academic credentials and experience
in course content
· Instructor had good communications with students and between students
· Total of 14 quite dedicated online students in course, most of whom
were mature with full-time day jobs
· 30% of grade from team projects
· Many unassigned online helper tutorials that were not fully utilized
by Goldie
· Goldie earned a 92 (A-)
· She gave a positive evaluation to the course and would gladly take
other courses if she had the time
· She considered the course to have a heavy workload
"U. of Phoenix Reports on Its Students' Academic
Achievement," by Goldie Blumenstyk, Chronicle of Higher Education,
June 5, 2008 ---
http://chronicle.com/daily/2008/06/3115n.htm?utm_source=at&utm_medium=en
The 2006 National Survey of Student Engagement, released November 13,
2006, for the first time offers a close look at distance education, offering
provocative new data suggesting that e-learners report higher levels of
engagement, satisfaction and academic challenge than their on-campus peers
---
http://nsse.iub.edu/NSSE_2006_Annual_Report/index.cfm
"The Engaged E-Learner," by Elizabeth Redden, Inside Higher Ed,
November 13, 2006 ---
http://www.insidehighered.com/news/2006/11/13/nsse
At the University of Wisconsin
"Online Degree Program Lets Students Test Out of What They Already Know,"
by Angela Chen, June 20, 2012 ---
Click Here
http://chronicle.com/blogs/wiredcampus/online-degree-program-lets-students-test-out-of-what-they-already-know/37097?cid=wc&utm_source=wc&utm_medium=en
The University of Wisconsin plans to start a
“flexible degree” program online focused on allowing undergraduates to test
out of material they have mastered.
The new program, geared toward working adults with
some college education, operates under a “competency based” model, said
Raymond Cross, chancellor of the University of Wisconsin Colleges and
University of Wisconsin-Extension. This model is similar to the Advanced
Placement program, in which high-school students take AP tests to pass out
of college-level courses.
In the university’s new program, college courses
will be broken down into units. For example, a higher-level mathematics
class could include units such as linear algebra and trigonometry. Students
can then test out of certain units (instead of full courses) and spend time
learning only material that is new to them. Eventually, the units will build
into courses, and then a degree. The flexible-degree program and
traditional-degree program will have identical course requirements, and
since each flexible degree will be associated with a specific campus, the
student will receive a diploma from the originating campus and not from the
system.
“We’re trying to find ways to reduce the cost of
education,” Mr. Cross said. “Implicit in the model is the idea that you can
take lectures online from free sources—like Khan Academy and MITx—and
prepare yourself for the competency test. Then take the remaining courses
online at UW.”
The biggest challenge, he says, is determining how
to best test competency. Some units will require tests, while others may
require written papers or laboratory work. The difficulty of measuring
“competency” for any unit will affect the program’s pricing structure,
which has not yet been determined.
The idea of competency-based credentials is common
in technical and health fields, Mr. Cross said, but it is rare at
traditional universities. The program is part of a push to encourage
Wisconsin’s 700,000 college dropouts to go back to a university.
“With higher ed now, people often have a piece or
two missing in their education, so we are responding to the changes in our
culture and helping them pull all these pieces together,” Mr. Cross said.
“Students already interface with a lot of different institutions and
different classes and professors, and this will help that process. I don’t
think this diminishes traditional higher ed at all. I think it’ll enhance
it.”
The first courses in the flexible-degree program
will be available starting in fall 2013. The university is still developing
exact degree specifications, Mr. Cross said. Likely degrees include business
management and information technology.
Bob Jensen's threads on distance education training and education
alternatives ---
http://faculty.trinity.edu/rjensen/Crossborder.htm
"Online Courses Should Always Include Proctored Finals, Economist Warns,"
by David Glenn, Chronicle of Higher Education, May 10, 2011 ---
Click Here
http://chronicle.com/blogs/wiredcampus/online-courses-should-always-include-proctored-finals-economist-warns/31287?sid=wc&utm_source=wc&utm_medium=en
Online economics students do not absorb much
material from homework and chapter tests during the semester—perhaps because
they expect to be able to cheat their way through the final exam. That is
the lesson from a
study that Cheryl J. Wachenheim, an associate
professor of agribusiness and applied economics at North Dakota State
University, will present in July at the annual meeting of the Agricultural
and Applied Economics Association.
Ms. Wachenheim is no enemy of distance education.
As The Chronicle
reported in 2009, she continued to teach her
online courses even during a National Guard deployment to Iraq. But she has
noticed that her online students perform much worse than their
classroom-taught counterparts when they are required to take a proctored,
closed-book exam at the end of the semester.
In her study—a
previous version of which appeared in the
Review of Agricultural Economics—Ms. Wachenheim looked at the
performance of students in six sections of introductory-economics courses at
North Dakota State. In online sections whose final exam was unproctored and
open book, students’ exam grades were roughly the same as those of
classroom-based students who took proctored, closed-book finals. But online
sections that were asked to take proctored, closed-book final exams
performed at least 15 points worse on a 100-point scale.
Ms. Wachenheim fears that students in those
unproctored online sections really weren’t learning much, even though their
grades were fine. In self-paced courses, many students appeared to cram most
of the homework and chapter exams into the final week of the semester. Few
of them bothered to do the ungraded practice problems offered by the online
publisher.
Then there is the question of cheating. Ms.
Wachenheim’s study did not gather any direct evidence, but she reports
anecdotally that students have told her how they work in groups to compile
huge caches of the publishers’ test-bank questions. She quotes one student
as saying, “We may not learn the material, but we are guaranteed an A.”
Ms. Wachenheim’s findings parallel those of a
2008 study in the Journal of Economic
Education. That study found indirect evidence that students cheat on
unproctored online tests, because their performance on proctored exams was
much more consistent with predictions based on their class ranks and their
overall grade-point averages.
Continued in article
Jensen Comment
Since proctors are easily distracted and often miss cheating such as peeking
at cribbed notes, angled glances at neighbors' work, and pass-the-trash note
exchanges, well-placed video cameras can often be better than proctoring,
especially when set up in individual cubicles.
Udacity ---
http://en.wikipedia.org/wiki/Udacity
Pearson PLC ---
http://en.wikipedia.org/wiki/Pearson_PLC
"Udacity to partner with Pearson for testing: What does this mean?" by
Robert Talbert, Chronicle of Higher Education, June 2, 2012 ---
Click Here
http://chronicle.com/blognetwork/castingoutnines/2012/06/02/udacity-to-partner-with-pearson-for-testing-what-does-this-mean/?sid=wc&utm_source=wc&utm_medium=en
Online educational startup
Udacity,
with whom I had a very positive experience
while taking their CS 101 course, is
taking things a bit further by partnering with Pearson.
They’ll be using
Pearson VUE testing centers worldwide to provide
proctored final exams for some of their courses (presumably all of their
courses will be included eventually), leading to an official credential and
participation in a job placement service.
Before, students watched the videos and did
homework assignments online and then took a final exam at the end of the
semester. In the first offering of CS 101, the “grade” for the course (the
kind of certificate you got from Udacity) depended on either an average of
homework scores and the final exam or on the final exam alone. Most Udacity
courses these days just use the final exam. But the exam is untimed and
unproctored, and there’s absolutely nothing preventing academic dishonesty
apart from the integrity of the student.
That’s not a great recipe for viable credentialing.
For people like me, who want the knowledge but don’t really need the
credentials, it’s enough, and I found their CS 101 course to be exactly the
right level for what I needed to learn. But if you’re an employer, you’d
want to have something a little more trustworthy, and so this is a logical
move for Udacity. It’s also a significant step towards establishing
themselves as more than just a web site with instructional videos.
The natural question for people like me is, what
does this mean for traditional higher education? Personally, I’m not
worried, because I teach at an
institution
that provides way more than just credentialing for job
placement. That’s not to downplay the importance of credentialing or job
placement — but that sort of thing is fundamentally different than a
university education, or at least a university education that hasn’t
forsaken its mission. Higher ed is a rich and complex ecosystem, and
universities don’t really compete in the same space as providers like
Udacity even with the sort of credentialing they’re describing. In fact
there could be opportunities for useful partnerships between universities
and online providers. Udacity certainly makes use of the university
professoriate to power its content delivery.
On the other hand, Udacity’s move should be a
warning to those institutions who have moved toward a credentialing + job
placement model: Your space is being invaded by a viable competitor who can
offer the same product for much less money.
Onsite Versus Online Education (including controls for online examinations
and assignments) ---
http://faculty.trinity.edu/rjensen/Assess.htm#OnsiteVersusOnline
Bob Jensen's threads on Udacity and other alternatives for educating the
masses ---
http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI
"To Monitor Online Testing, Western Governors U. Gives Students Webcams,"
by Alexandra Rice, Chronicle of Higher Education, November 2, 2011 ---
http://chronicle.com/blogs/wiredcampus/to-monitor-online-testing-western-governors-u-gives-students-webcams/34099
Welcome packets for students at Western Governors
University now include a free Webcam, part of an extensive monitoring
program used by the online university to make sure test-takers are who they
say they are.
At Western Governors, the average student is 36
years old, has a family, and takes a full course load on top of holding a
full-time job. Because it’s convenient for them to be able to take tests
from home, students have embraced the technology, says Janet W. Schnitz,
associate provost for assessment and interim provost at the university.
The university, which first started handing out
cameras in July 2010, now has over 30,000 Webcams in use.
Before 2009, when the university introduced its
Webcam pilot program, students had to go to one of 6,000 on-site assessment
centers to take a test. For many students, this could involve taking time
off work, securing a babysitter, and then driving several hours to the
center.
“Trying to get to different sites to take these
exams—that took up to four hours to complete—was quite onerous on the
students,” Ms. Schnitz said. “So we began looking for a secure environment
that would allow us to identify the student and provide a secure testing
environment that was more conducive to the lifestyle of our adult students.”
The camera, which is mounted on a stick, is not the
standard Web camera found on a computer. Standard Webcams, Ms. Schnitz said,
provide only a view of the student. With this camera, proctors can see the
computer screen, the students’ hands and profile, and a 180-degree view of
the room.
While the university is still working out some bugs
in the system, such as full compatibility with Apple products and issues
with satellite Internet connections, Ms. Schnitz says the transition has
been fairly seamless and beneficial for both the university and its
students. The system the university uses, known as Webassessor, was
developed by the online testing technology company Kryterion.
“The one thing I think that really helps us the
most is that they have full streaming and live proctors who are actually
watching the students during the entire testing event,” Ms. Schnitz said.
“We really felt that it was important that it not be viewed after the fact,
and that it be viewed during the actual testing.”
The idea behind the live proctor is twofold: to
have someone monitoring students and checking for any aberrant behavior and
also to have someone there in case a student has a technical issue.
Students’ dress is another issue the university is
still working out when using the cameras, Ms. Schnitz said. Before beginning
an exam, the student's hair has to be pulled fully behind his or her ears to
make sure there is no device feeding the student answers. For some
students, such as those who wear headscarves for religious reasons, this can
present a problem. In those cases, the university can arrange for female
proctors or students can choose to take the test at one of the on-site
centers.
The university administers roughly 2,000 of the
10,000 tests it gives each month at physical testing centers, and the rest
through the Webcam system, according to Ms. Schnitz.
Continued in article
Jensen Comment
Since WGU is a competency-based university, instructors do not assign final
grades. This makes testing integrity doubly important since final grades are
based upon examination performance throughout the term.
Creative Commons ---
http://en.wikipedia.org/wiki/Creative_Commons
Creative Commons Home Page ---
http://creativecommons.org/
Creative Commons Directory of Resources ---
http://wiki.creativecommons.org/Content_Curators
Creative Commons Free Video ---
http://wiki.creativecommons.org/Content_Curators
Bob Jensen's threads on global online training and education alternatives
---
http://faculty.trinity.edu/rjensen/Crossborder.htm
The New University of Illinois Online Global Campus
Online-education venture at the U. of Illinois tries to distinguish itself
from other distance-learning programs
"The Global Campus Meets a World of Competition," by Dan Turner, The
Chronicle of Higher Education's Chronicle Review, April 3, 2009 ---
http://chronicle.com/free/v55/i30/30a01001.htm
The University of Illinois Global Campus, a
multimillion-dollar distance-learning project, is up and running. For its
March-April 2009 term, it has enrolled 366 students.
Getting to this point, though, has looked a little
like the dot-com start-up bubble of the late 1990s. Hundreds of
Internet-related companies were launched with overly ambitious goals, only
to later face cutbacks and other struggles to stay alive. Most crashed
anyway. Some observers now say the Global Campus must try to avoid the same
fate of churning through a large initial investment while attracting too few
customers.
The project, planned about four years ago, was
designed to complement existing online programs offered by individual
Illinois-system campuses at Urbana-Champaign, Springfield, and Chicago.
Those programs primarily serve current students as an addition to their
on-campus course work. The Global Campus, in contrast, seeks to reach the
adult learner off campus, who is often seeking a more focused,
career-related certification or degree, such as completing a B.S. in
nursing.
Online education has proved popular with
institutions, students, and employers across the United States, with
opportunities and enrollment growing. According to the Sloan Consortium, a
nonprofit organization focused on online learning, the fall 2007 term saw
3.9 million students enroll in at least one online course, many at
for-profit institutions like DeVry University and the University of Phoenix.
That growing popularity, says David J. Gray, chief
executive of UMassOnline, the online-learning arm of the University of
Massachusetts system, is part of the Global Campus's problem. The Illinois
program, he says, is "fighting uphill in a market that's a lot more uphill."
The slope didn't seem as steep in the fall of 2005,
when Chester S. Gardner, then the university's vice president for academic
affairs, led a committee to investigate ideas for the future of online
education at Illinois. That resulted in a proposal and business plan
presented to the Board of Trustees the next year. The system's "existing
online programs were not structured for adult learners," says Mr. Gardner,
who is now leading the Global Campus.
The program was formally established in March 2007.
The university initially financed it with $1.5-million of general revenue.
The program started teaching its first 12 students in 2008.
Now, Mr. Gardner says, the Global Campus has a
budget of approximately $9.4-million for the 2008-9 fiscal year.
Approximately $1-million of that comes from the state, he says, and the
remaining money comes from various grants, tuition, and loans from the Board
of Trustees.
The trustees' investment has produced heavy
involvement, Mr. Gardner says. "They're acting like venture capitalists," he
notes, adding that "they're certainly doing their job of holding my feet to
the fire."
This year the 366 Global Campus students are
enrolled in five different degree and four different certificate programs;
Mr. Gardner expects the number of students to rise to around 500 by May.
Those numbers put the program on a much slower
track than earlier, sunnier estimates of 9,000 students enrolled by 2012.
Mr. Gardner says the 9,000 figure came from his 2007 budget request to the
trustees and was not precise. "We had no direct experience upon which to
base our projections," he says.
Now, Mr. Gardner says, he has more realistic
figures. Once 1,650 students are enrolled, the monthly income from tuition
will equal monthly expenses, on average. His current projections show the
Global Campus reaching that point of stability by the 2011 fiscal year.
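The break-even logic Mr. Gardner describes can be sketched as a simple calculation. The per-student monthly tuition and fixed-cost figures below are hypothetical, chosen only to illustrate how a 1,650-student break-even point could arise; they are not Global Campus data.

```python
import math

def break_even_enrollment(monthly_fixed_costs, net_tuition_per_student_month):
    """Smallest whole number of students at which monthly tuition
    revenue meets or exceeds monthly operating expenses."""
    return math.ceil(monthly_fixed_costs / net_tuition_per_student_month)

# Hypothetical illustration: $825,000/month in expenses and $500/month
# net tuition per student yield the 1,650-student break-even point.
students = break_even_enrollment(825_000, 500)
print(students)  # 1650
```

Any combination of costs and per-student revenue with the same ratio gives the same break-even enrollment, which is why such projections shift so much as real enrollment and cost data replace early estimates.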
Continued in article
Bob Jensen's threads on cross border distance education and training
alternatives ---
http://faculty.trinity.edu/rjensen/crossborder.htm
Bob Jensen's threads on education technology and distance education ---
http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
Good Luck Jack (and Suzi): You're Going to Need All the Luck You Can Get
"Jack Welch Launches Online MBA: The legendary former GE CEO says he knows a
thing or two about management, and for $20,000 you can, too," by Geoff
Gloeckler, Business Week, June 22, 2009 ---
http://www.businessweek.com/bschools/content/jun2009/bs20090622_962094.htm?link_position=link1
A corporate icon is diving into the MBA world, and
he's bringing his well-documented management and leadership principles with
him. Jack Welch, former CEO at General Electric (GE) (and BusinessWeek
columnist), has announced plans to start an MBA program based on the
business principles he made famous teaching managers and executives in GE's
Crotonville classroom.
The Jack Welch Management Institute will officially
launch this week, with the first classes starting in the fall. The MBA will
be offered almost entirely online. Compared to the $100,000-plus price tag
for most brick-and-mortar MBA programs, the $600 per credit hour tuition
means students can get an MBA for just over $20,000. "We think it will make
the MBA more accessible to those who are hungry to play," Welch says. "And
they can keep their job while doing it."
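A quick check of the quoted arithmetic: at $600 per credit hour, a total "just over $20,000" implies a program of roughly 34 credit hours. The credit count below is my inference for illustration, not a figure from the article.

```python
# Sanity-check the quoted tuition: $600 per credit hour versus a
# "just over $20,000" total. The 34-credit figure is an inference.
tuition_per_credit = 600
assumed_credits = 34                      # hypothetical program length
total_cost = tuition_per_credit * assumed_credits
print(total_cost)  # 20400
```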
To make the Jack Welch Management Institute a
reality, a group led by educational entrepreneur Michael Clifford purchased
financially troubled Myers University in Cleveland in 2008, Welch says.
Welch got involved with Clifford and his group of investors and made the
agreement to launch the Welch Management Institute.
Popularized Six Sigma
For Welch, the new
educational endeavor is the latest chapter in a long and storied career. As
GE's longtime chief, he developed a management philosophy based on
relentless efficiency, productivity, and talent development. He popularized
Six Sigma, wasn't shy about firing his worst-performing managers, and
advocated exiting any business where GE wasn't the No. 1 or No. 2 player.
Under Welch, GE became a factory for producing managerial talent, spawning
CEOs that included James McNerney at Boeing (BA), Robert Nardelli at
Chrysler, and Jeff Immelt, his successor at GE.
Welch's decision to jump into online education
shows impeccable timing. Business schools in general are experiencing a rise
in applications as mid-level managers look to expand their business acumen
while waiting out the current job slump. The new program's flexible
schedule—paired with the low tuition cost—could be doubly attractive to
those looking to move up the corporate ladder as the market begins to
rebound.
Ted Snyder, dean of the University of Chicago's
Booth School of Business, agrees. "I think it's a good time for someone to
launch a high-profile online degree," Snyder says. "If you make the
investment in content that allows for a lot of interaction between faculty
and students and also among students, you can get good quality at a much
more reasonable tuition level."
Welch's Secret Weapon
That being said, there are
challenges that an online MBA program like Welch's will have a difficult
time overcoming, even if the technology and faculty are there. "The
integrity and quality of engagement between faculty and students is the most
precious thing we have," Snyder says. "Assuming it's there, it dominates.
These things are hard to replicate online."
But Welch does have one thing that differentiates
his MBA from others: himself. "We'll have all of the things the other
schools have, only we'll have what Jack Welch believes are things that work
in business, in a real-time way," he says. "Every week I will have an online
streaming video of business today. For example, if I was teaching this week,
I would be putting up the health-care plan. I'd be putting up the financial
restructuring plan, talking about it, laying out the literature, what others
are saying, and I'd be talking about it. I'll be doing that every week."
Welch and his wife Suzy are also heavily involved
in curriculum design, leaning heavily on the principles he used training
managers at GE.
Continued in Article
Jensen Comment
There are at least three enormous obstacles standing in the way of the
super-confident Jack Welch on this one. I should mention that I've never been a
Jack Welch fan and am especially disturbed that he is the world's leader in
platinum retirement perks that, in my opinion, go way beyond his value in the
past and future to GE. But I will try to not let my prejudices bias my remarks
below.
- This raises the question of why students choose one MBA program over
another after being admitted to several. For example, suppose a student has
not yet made a decision about accepting MBA program offers at Harvard,
Wharton, Stanford, Claremont, or the Jack (and Suzi) Welch Management
Institute. Assume location and climate are of no concern in this choice.
Some years back the relatively new Claremont MBA program assumed that the
worldwide reputations of faculty were the most important draw for new
students. So they hired at least one big name in each of the business
disciplines, the most notable of which was the famous Peter Drucker.
I won't go into details here and Claremont has a very respected MBA program,
but it has had huge problems attracting enough top students. The reason,
quite simply, in my view, is that students choose MBA programs for
reasons other than the reputations of faculty. Of course they assume that a top
MBA program has hired top faculty, but reputations of individual faculty are
not why they choose Stanford over Harvard or Wharton over Claremont. They
choose MBA programs for many of the reasons that lead to top rankings in
U.S. News or the WSJ. They want high-paying opportunities for
fast track wealth, and they assume the last five decades of established
success in that regard makes an MBA program the best for them. They also
want to be among the best students and alumni in the world, because they
feel that networking with current students and active alumni is a leading,
if not the leading, factor for career advancement opportunity.
Having a few big names on the faculty just does not cut it relative to the
more important factors when top students seek out an MBA program. The same
can be said to a somewhat lesser extent when choosing a doctoral studies
program. In the latter case, an applicant is often heavily influenced by a
current or former Professor X who recommends the doctoral program at
University Y because Professor Z happens to be a leading research advisor at
University Y. This is not the case for MBA students in most instances.
- If you're starting up an MBA program, an online MBA program is probably
a good idea. This will attract some high GMAT applicants who, for whatever
reason, just cannot leave town to become a full-time student in another
locale. But at the same time, an online MBA program is a turn off to other
top prospects. Some of the reasons were mentioned above. In addition, online
degree programs still have a stigma that online degrees are inferior (even
though many studies, such as the SCALE Experiment at Illinois, suggest that
online learning may be better if online instruction is excellent). Equally
important is that potential employers generally recruit more aggressively in
reputable onsite MBA programs. Jack Welch will have more success if he can
get inside tracks for his graduates to roll into the top jobs. Somehow I
doubt that he can do this for more than a handful of graduates vis-a-vis the
competition from the top 50 MBA programs ranked by U.S. News and the
WSJ.
- The timing could not be worse for starting an MBA program. Top programs
at Harvard, Stanford, Wharton, etc. are having trouble placing their
students, including their top students after Wall Street virtually imploded
and we're in probably the worst job market since the 1930s. This June, 80%
of the nation's undergraduates seeking employment could not find jobs for
which a college education is required. I suspect the situation is even worse
for the nation's MBA programs in terms of graduates who did not already have
satisfactory jobs before entering an MBA program. Some enter such programs
with jobs such as when a career military officer decides to go for an MBA on
the side.
In any case it will be interesting to track the progress of the Jack Welch
Management Institute. I would applaud if it becomes one of the best online
degree programs in the world, because I highly support the development of more
and better online training and education programs in the world ---
http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
The Official Website of the Jack (and Suzi) Welch Management Institute is at
http://www.welchway.com/
Online Learning Tips & Online College Reviews ---
http://www.onlinecollege.org/
CHOOSE AN ACCREDITED ONLINE
SCHOOL
An
important factor to consider is accreditation.
Traditional colleges and universities have long been
evaluated by educational accreditors who ensure that
their programs meet certain levels of quality. Regional
and national organizations now accredit online programs
too. In the United States, online colleges that are
fully accredited have been recognized by one of six
regional accreditation boards that also evaluate
traditional campuses. These include the Middle States, New England, North
Central (Higher Learning Commission), Northwest, Southern, and Western
accrediting associations.
In addition, the U.S.
Department of Education and
the
Council for Higher Education Accreditation
(CHEA) recognize the
Distance Education and Training
Council
(DETC) as a reputable accreditor for education programs
that offer online degrees. Once an online program
becomes accredited, it’s more likely that a traditional
school will accept its transfer credits and that
employers will recognize its value.
HOW TO CHOOSE AN ONLINE SCHOOL
How should someone select an online school? Just as
students have different priorities when choosing
physical campuses, they will have different criteria for
choosing an online institution. For example:
- Prestige. Some students need a
degree from a prestigious university in order to
advance in their particular field. Others are not
concerned with elite reputations; as long as their
program is accredited, it will move them forward.
- Expense. Some students wish to find
schools that offer the most financial aid or have
low tuition, but others - such as people with
education benefits from the military - needn’t take
cost into account.
- Pace. Some people want to earn
their online degree as quickly as possible. They
seek accelerated degree programs or those that will
accept their previously-earned academic credits or
grant credit for life experiences (e.g., military
training). Other people prefer to learn at a slower
pace.
Clearly, the variation among individuals means that
there will be variation among any rankings that people
would assign to online institutions. At the same time,
it is helpful to consider as a starting point another’s
list of top online schools. The twenty online schools
presented below are all accredited by one of the six
aforementioned accrediting bodies. Factors such as
tuition, reputation, academic awards, and range of
degree programs have also been taken into account.
TOP TWENTY ONLINE COLLEGE SCHOOLS
1.
Western Governors University has an excellent
reputation; in 2008 it received the United States
Distance Learning Association’s 21st Century
Award for Best Practices in Distance Learning. The
school was founded by the governors of nineteen western
states and it’s accredited by the Northwest Commission
on Colleges and Universities.
This school is ideal for quick
learners who want an accelerated program. With
competency-based learning, students are able to progress
as quickly as they can demonstrate having mastered the
required knowledge.
A variety of online undergraduate and
graduate degrees are offered. Some examples include
baccalaureates and MBAs in business, 26 programs related
to teaching, and several nursing programs.
2.
The University of Phoenix is one of the
best-publicized online educators. It is accredited by
the Higher Learning Commission. In addition to being
experienced with web-based instruction, the University
of Phoenix has physical campuses across the United
States. As of 2008 it was the nation’s largest private
university and had an enrollment of nearly 350,000
students. The university offers more than 100 degree
programs at the
associate’s,
bachelor’s,
master’s and
doctoral levels.
3.
Florida Tech University Online is accredited by the
Commission on Colleges of the Southern Association of
Colleges and Schools. It has been ranked as a top
national university by
U.S. News & World Report, the Fiske Guide
to Colleges, and Barron’s Best Buys in College
Education. A special feature of instruction is the
MP3 downloads that allow students to take lectures away
from the computer.
Degrees are offered in business,
liberal arts, criminal justice, and healthcare. Special
discounts are available to members of the military and
their spouses.
4.
Capella University awards bachelor’s, master’s, and
doctoral degrees. The majority of students receive
financial aid that is unrelated to their income, and
many companies have such confidence in Capella
University that they pay for their employees’ tuition.
Degrees are awarded in: business;
computers and information technology; education and
teaching; health and medicine; the social sciences; and
criminal justice. Capella University is accredited by
the North Central Association of Colleges and Schools.
5.
Walden University is accredited by the North Central
Association of Schools and Colleges. In a 1999 review of
fully online schools, the business magazine Fast
Company awarded its only A grade to Walden
University. US News and World Report has
described Walden as well-regarded.
Walden offers a variety of
undergraduate and graduate degrees ranging from nursing
to information technology and business, including the
MBA.
6.
California Coast University is accredited by the
Distance Education and Training Council. California
Coast offers a unique self-paced program; courses are
not structured by semesters or other traditional
timeframes, so students are able to begin at any time of
year. Degrees are awarded in business, education and
teaching, health and nursing, the social sciences, and
criminal justice.
7.
South University has been educating students for
more than a century. It is accredited by the Southern
Association of Colleges and Schools and offers online
degrees in business, nursing, healthcare, criminal
justice, accounting, and information technology. With a
flexible scheduling program, students may take just one
course at a time or several concurrently for accelerated
learning.
8.
Drexel University was established as a traditional
campus in 1891. This Philadelphia-based institution was
named among the “Best National Universities” by
U.S. News & World Report. Drexel is accredited
by the Middle States Association of Colleges and
Schools.
Drexel University has offered online
education since 1996. Degrees granted include the MBA,
the Master of Science in Library & Information Science,
the Bachelor of Science in Nursing, and many others.
9.
Southern New Hampshire University is accredited by
the New England Association of Schools and Colleges. It
offers more than 50 programs leading to undergraduate
and graduate degrees and certificates. SNHU has been
named “Best of Business” by the New Hampshire
Business Review and in 2008 its business program
was deemed the best online program in its class.
10.
Vanderbilt University is a well-respected
institution with a physical campus founded in 1873. It
is accredited by the Commission on Colleges of the
Southern Association of Colleges and Schools.
As of 2008, Vanderbilt’s only fully
online program is the master’s degree in nursing
administration. This single program is worth mentioning
because America’s Best Graduate Schools ranks
Vanderbilt’s School of Nursing among the top nursing
programs offering master’s degrees.
11.
New England College was founded in 1946 for
post-war education and is accredited by the New England
Association of Schools and Colleges. It offers online
master’s degrees in accounting, criminal justice
leadership, nonprofit leadership, and many other
subjects.
12.
Nova Southeastern University is the largest
independent university in Florida. It is accredited by
the Southern Association of Colleges and Schools and has
appeared on the Princeton Review’s list of the best
distance learning graduate schools. Nova Southeastern
offers online degrees in education and teaching.
13.
DeVry University’s Keller Graduate School of Management
awards a great number of business degrees in many
specialty areas such as accounting, human resource
management, and financial analysis. Students may choose
to take all of their courses online or combine online
learning with campus-based instruction.
14.
Baker University features relatively low tuition and
offers a wide variety of degrees at every level in
business, computers and IT, health and medicine, and
nursing. Baker is accredited by the North Central
Association of Colleges and Schools. Online learning
takes place using Blackboard, a system that creates an
online classroom setting in which instructors and
students can interact.
15.
Marist College has a physical campus in
Poughkeepsie, NY and is accredited by the Middle States
Association of Colleges and Schools. It offers online
degrees in communications, business, public
administration, information systems, and technology
management.
16.
Upper Iowa University is accredited by the North
Central Association of Colleges and Schools. It offers
degrees through campus-based learning in several states,
and its online programs include business, computers and
information technology, health, nursing, and the social
sciences.
17.
Ashford University, founded in 1918, offers
accelerated programs so that degrees can be earned in as
little as one year. Courses are 5-6 weeks long and are
taken one at a time. Examples of degrees include the
Bachelor of Arts in Psychology and a Master of Arts in
Organizational Management.
18.
Kaplan University was founded in 1937 and is
accredited by The Higher Learning Commission of the
North Central Association of Colleges and Schools. It
offers campus-based learning and also grants online
master’s, bachelor’s, associate’s, and professional law
degrees, as well as online certificate programs. Subject
areas include business, criminal justice, IT, and
paralegal studies.
19.
Northwestern University has been among the top
schools as ranked by U.S. News & World Report. Its
School of Continuing Studies offers an online
Master of Science in Medical Informatics.
Students may also take distance learning courses in a
variety of other subjects.
20.
Liberty University is the world’s largest
evangelical Baptist university. In 2008 the Online
Education Database ranked Liberty third of all online
U.S. universities. More than 35 degree programs are
offered, including the Master of Arts in Marriage and
Family Therapy.
Jensen Comment
Although the above information is helpful, it should be emphasized that some of
the very best and largest online programs are really state-supported
universities not in the above ranking, including such universities as the
University of Wisconsin, the University of Maryland, the University of Illinois
(which has a new global online degree program), and virtually every other state
university in the United States. In most instances the large universities have
specialty degree programs not available in the above universities and sometimes
many more courses to choose from in a given specialty.
And there are some outstanding online community college programs not
mentioned above.
Bob Jensen's threads on global online training and education alternatives
---
http://faculty.trinity.edu/rjensen/Crossborder.htm
"New Project Enlists Women to Help Women Learn Online," by Marc Parry,
Chronicle of Higher Education, April 29, 2009 ---
http://chronicle.com/wiredcampus/index.php?id=3738&utm_source=wc&utm_medium=en
Gail
Weatherly has gotten phone calls from women near tears over
their situations.
They’re taking care of kids. They can’t afford child care. They
can’t make it to regular classes. And they don’t know about
online learning, said Ms. Weatherly, distance-education
coordinator at
Stephen F. Austin State University, in
Nacogdoches, Tex.
Ms.
Weatherly hopes such women could one day benefit from a project
being developed by a scattered group of women involved in
distance education.
Their work centers on a social-networking Web site that would
allow women to share information about online education and
serve as mentors to one another. It’s called the Collaborative
Online Resource Environment for Women (Core4women), a
still-in-the-works effort that Ms. Weatherly and her colleagues
described during a workshop here Monday at the
national conference of the United States Distance Learning
Association.
The project, billed in the presentation as “A Better Way: Women
Telling Women About Online Learning,” evolved from Ms.
Weatherly’s dissertation research at Texas A&M University.
Studies like the American Association of University Women’s
“The Third Shift” had examined
barriers to women pursuing education. Ms. Weatherly sought to
push beyond that. She looked at how earning online degrees
changed women’s lives, sometimes in major ways, like one woman
who left an abusive relationship. In the process, Ms. Weatherly
encountered research subjects who wanted to share the expertise
they had gained with other women.
Long story short: Ms. Weatherly and some colleagues set up a
pilot project on the free social-networking site
Ning. A scattered group of female
mentors from the world of distance education worked with a
small group of Texas college students, victims of abuse or
poverty, who signed up to help test the private site. The
project’s organizers hope to expand the effort and gain the
sponsorship of the
USDLA, which
has an offshoot called the
International Forum for Women in E-Learning.
A
Chronicle reporter was the only male in the audience Monday,
but two women present raised the subject of how the other sex
fits into this: Is there going to be a mentor network for men?
And why do they have to be separate? Why not Core4people?
In an
interview after the presentation, Ms. Weatherly responded by
returning to her research. Women shared experiences with her
that they might not have shared with a man: taking an online
class when they were expecting a child and very sick, for
example. Men might be participating more in caregiving these
days. Largely, though, Ms. Weatherly said, “women still feel
like they would sacrifice going to school for their family.”
“Sometimes I think they need another woman to say, It’s OK for
you to work and take care of your children and earn a
degree – and you can do that easier by online learning,” Ms.
Weatherly said.
Also check out
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#DistanceEducation
This includes most of the recent AECM messaging.
You will also find added messaging (plus online
training and education alternatives) at
http://faculty.trinity.edu/rjensen/crossborder.htm
The broad spectrum of asynchronous learning ---
http://faculty.trinity.edu/rjensen/255wp.htm
This includes concerns about the explosion of asynchronous learning.
Dark side considerations ---
http://faculty.trinity.edu/rjensen/000aaa/theworry.htm
I think the following applies to education as well as any
business corporation. The problem is that universities are notoriously
slow to change relative to such organizations as business firms and the
military.
New Technology for Proctoring Distance Education Examinations
"Proctor 2.0," by Elia Powers, Inside Higher Ed, June 2, 2006 ---
http://www.insidehighered.com/news/2006/06/02/proctor
It’s time for final exams. You’re a student in
Tokyo and your professor works in Alabama. It’s after midnight and you’re
ready to take the test from your bedroom. No problem. Flip open your laptop,
plug in special hardware, take a fingerprint, answer the questions and
you’re good to go.
Just know this: Your professor can watch your every
move ... and see the pile of laundry building up in the corner of the room.
Distance learning programs – no matter their
structure or locations – have always wrestled with the issue of student
authentication. How do you verify that the person who signed up for a class
is the one taking the test if that student is hundreds, often thousands, of
miles away?
Human oversight, in the form of proctors who
administer exams from a variety of places, has long been the solution. But
for some of the larger distance education programs — such as Troy
University, with about 17,000 eCampus students in 13 time zones — finding
willing proctors and centralized testing locations has become cumbersome.
New hardware being developed for Troy would allow
faculty members to monitor online test takers and give students the freedom
to take the exam anywhere and at any time. In principle, it is intended to
defend against cheating. But some say the technology is going overboard.
Sallie Johnson, director of instructional design
and education technologies for Troy’s eCampus, approached Cambridge,
Mass.-based Software Secure Inc. less than two years ago to develop a unit
that would eliminate the need for a human proctor. Johnson said the hardware
is the university’s response to the urgings of both Congress and regional
accrediting boards to make authentication a priority.
The product, called Securexam Remote Proctor, would
likely cost students about $200. The unit hooks into a USB port and does not
contain the student’s personal information, allowing people to share the
product. The authentication is done through a server, so once a student is
in the database, he or she can take an exam from any computer that is
hardware compatible.
A fingerprint sensor is built into the base of the
remote proctor, and professors can choose when and how often they want
students to identify themselves during the test, Johnson said. In the
prototype, a small camera with 360-degree-view capabilities is attached to
the base of the unit. Real-time audio and video is taken from the test
taker’s room, and any unusual activity — another person walking into the
room, an unfamiliar voice speaking — leads to a red-flag message that
something might be awry.
Professors need not watch students taking the test
live; they can view the streaming audio or video at any time.
“We can see them and hear them, periodically do a
thumb print and have voice verification,” Johnson said. “This allows faculty
members to have total control over their exams.”
Douglas Winneg, president of Software Secure, said
the new hardware is the first the company has developed with the distance
learning market in mind. It has developed software tools that filter
material so that students taking tests can’t access any unauthorized
material.
Winneg, whose company works with a range of
colleges, said authentication is “a painful issue for institutions, both
traditional brick-and-mortar schools and distance learning programs.”
Troy is conducting beta tests of the product at its
home campus. Johnson said by next spring, the Securexam Remote Proctor could
commonly be used in distance learning classes at the university, with the
eventual expectation that it will be mandatory for students enrolled in
eCampus classes.
Bob Jensen's threads on emerging tools of our trade ---
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm
Distance educators won’t have to become
FBI-style investigators, scanning fingerprints and
installing cameras in the apartments of online students to ensure that
people are who they say they are.
At least not yet.
The recently reauthorized Higher Education Act
required accreditors to monitor the steps that colleges take to verify that
an enrolled student is the same person who does the work, leaving distance
educators
worried they would have to buy expensive technology
to ensure that students didn’t have other people take
their tests. They feared the cost could be so high that programs would be in
danger.
But,
as
The Chronicle reports on its Web site today,
proposed federal regulations would allow colleges to
satisfy the mandate with techniques like secure log-ins and passwords or
proctored examinations, according to people involved in the negotiations
that ended last month.
After an emotional controversy that touched on
cheating, privacy, and Congress’s lingering discomfort with distance
education, some in the field are welcoming the developments.
Some distance educators believed they were being
held to a higher standard than their peers at bricks-and-mortar
institutions. And some technology vendors exacerbated the anxiety through
“purposeful distortion” of the law, said Fred B. Lokken, an associate dean
at Truckee Meadows Community College, in Nevada.
“There were companies who saw a chance here to get
their business base by, I think, exaggerating what the [act] was requiring
for distance-education programs,” said Mr. Lokken, chair of the
Instructional Technology Council, an affiliate of the American Association
of Community Colleges.
Do you think secure log-ins and passwords are
enough to verify a student’s identity?
Ideas for Teaching Online ---
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#Ideas
Also see the helpers for teaching in general at
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm
In a previous edition of Tidbits, I provided a summary of resources for
learning how and being inspired to teach online ---
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#Ideas
I forgot to (and have since added) helpers for assessment (e.g. testing)
online ---
http://faculty.trinity.edu/rjensen/assess.htm#OnlineOffCampus
Also see the helpers for assessment in general at
http://faculty.trinity.edu/rjensen/assess.htm
Question
Why do colleges have to identify each of their online students without the same
requirement imposed on onsite students?
"Unmuzzling Diploma Mills: Dog Earns M.B.A. Online," by Marc Parry,
Chronicle of Higher Education, September 23, 2009 ---
http://chronicle.com/blogPost/Unmuzzling-Diploma-Mills-Dog/8175/
How's this for "hounding" diploma mills?
GetEducated.com, an online-learning consumer group,
managed to purchase an online M.B.A. for its mascot, a dog named Chester
Ludlow.
The Vermont pug earned his tassels by pawing over
$499 to Rochville University, which offers "distance learning degrees based
on life and career experience," according to a
news release from GetEducated. He got back a
package from a post-office box in Dubai that contained a diploma and
transcripts, plus a certificate of distinction in finance and another
purporting to show membership in the student council.
GetEducated.com believes Chester is the first dog to
get a diploma for life experience. But his bow-wow M.B.A. isn't the first
canine college degree: Witness
this 2007 story about
a police-department dog's diploma.
Here's GetEducated.com's video about the stunt:
"Dog Earns Online MBA: A Cautionary Tail."
Bob Jensen's threads on diploma mills and gray zones ---
http://faculty.trinity.edu/rjensen/FraudReporting.htm#DiplomaMill
Jensen Comment
Why not a diploma? Thanks to ACORN, Chester Ludlow was registered to vote in the
2008 election. In all seriousness, proper identification of students is a
problem for legitimate colleges whether the students are onsite or online. My
daughter's first chemistry course at the University of Texas was given in a
lecture hall of 600 students. It would've been very easy for her to have hired a
surrogate to take the entire course in her name or examinations in her name.
I know of an outsourcing case like this from years ago when I was an
undergraduate student, because I got the initial offer to take the course for
$500.
Fake IDs are easy to fabricate today on a computer. Just change the name and
student number on your own ID or change the picture and put the fake ID in
laminated plastic.
There are ways to authenticate honesty online. One way is to have a
respected person sign an attestation form. In 19th Century England the Village
Vicar signed off on submissions of correspondence course takers. There are also
a lot of
Sylvan Centers throughout the U.S. that will administer examinations.
To comply with the newly reauthorized
Higher Education Act, colleges have to verify the
identity of each of their online students.
Several tools can help them do that, including the
Securexam Remote Proctor, which scans fingerprints and captures a 360-degree
view around students, and Kryterion’s Webassessor, which lets human proctors
watch students on Web cameras and listen to their keystrokes.
Now colleges have a new option to show the
government that they’ll catch cheating in distance education. Acxiom
Corporation and Moodlerooms announced this month that they have integrated
the former’s identity-verification system, called FactCheck-X, into the
latter’s free, open-source course-management system, known as Moodle.
“The need to know that the student taking a test
online is in fact the actual one enrolled in the class continues to be a
concern for all distance-education programs,” Martin Knott, chief executive
of Moodlerooms, said in a
written statement.
FactCheck-X, which authenticates many
online-banking transactions, requires test takers to answer detailed,
personal “challenge” questions. The information comes from a variety of
databases, and the company uses it to ask for old addresses, for example, or
previous employers.
The new tool requires no hardware and operates
within the Moodle environment. Colleges themselves control how frequently
students are asked to verify their identities, Acxiom says, and because
institutions don’t have to release information about students, the system
fully complies with the Family Educational Rights and Privacy Act.
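Knowledge-based "challenge" verification of this sort can be sketched in a few lines. This is a toy illustration only: FactCheck-X's actual data sources and interface are proprietary, and every name and address below is invented.

```python
import random

# Toy knowledge-based authentication in the spirit of the challenge
# questions described above: the test taker must pick the true record
# (an old address, a past employer) out of plausible decoys.
def build_challenge(true_fact, decoys, rng):
    options = decoys + [true_fact]
    rng.shuffle(options)                     # decoys and truth in random order
    return options, options.index(true_fact)

def verify(options, answer_index, chosen):
    # The check passes only if the taker picked the true record.
    return options.index(chosen) == answer_index

rng = random.Random(42)  # fixed seed keeps the example reproducible
options, answer = build_challenge(
    "14 Elm St, Yakima WA",                         # hypothetical true address
    ["9 Oak Ave, Reno NV", "22 Pine Rd, Waco TX"],  # hypothetical decoys
    rng,
)
```

A real system would, as the article notes, pull the true facts from commercial databases and rotate the questions so they cannot be researched in advance.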
Comments
Linebacker's Wife Says She Wrote His Papers
(and took two online courses for him)
The wife of a star University of South Florida
linebacker says she wrote his academic papers and took two online classes for
him. The accusations against Ben Moffitt, who had been promoted by the
university to the news media as a family man, were made in e-mail messages to
The Tampa Tribune, and followed Mr. Moffitt’s filing for divorce. Mr. Moffitt
called the accusations “hearsay,” and a university spokesman said the matter was
a “domestic issue.” If it is found that Mr. Moffitt committed academic fraud,
the newspaper reported, the university could be subject to an NCAA
investigation.
"Linebacker's Wife Says She Wrote His Papers," Chronicle of Higher Education
News Blog, January 5, 2008 ---
http://chronicle.com/news/article/3707/linebackers-wife-says-she-wrote-his-papers?at
Jensen Comment
If Florida investigates this and discovers it was true, I wonder if Moffitt's
diploma will be revoked. Somehow I doubt it.
Ideas for online testing and other types of assessment are at
http://faculty.trinity.edu/rjensen/assess.htm#OnlineOffCampus
Also see the helpers for assessment in general at
http://faculty.trinity.edu/rjensen/assess.htm
Bob Jensen's threads on cheating are at
http://faculty.trinity.edu/rjensen/Plagiarism.htm
Question
What's the value of watching somebody send you an email message?
Answer
There may be some security and subtle communication advantages, but there's a
huge cost-benefit consideration. Is it worth valuable bandwidth costs to
transmit all that video of talking heads and hands? I certainly hope that most
of us do not jump into this technology "head" (get it?) first.
One huge possible benefit might be in distance
education. If a student is sending back test answers via email, it could add a
lot to the integrity of the testing process to watch the student over this new
video and audio channel from Google.
"Google juices up Gmail with video channel," MIT's Technology Review,
November 11, 2008 ---
http://www.technologyreview.com/wire/21665/?nlid=1507&a=f
Google Inc. is introducing new tools that will
convert its free e-mail service into a video and audio channel for people
who want to see and hear each other while they communicate.
Activating the features, introduced Tuesday, will
require a free piece of software as well as a Webcam, which are becoming
more commonplace as computer manufacturers embed video equipment into
laptops.
Once the additional software is installed, Gmail
users will be given the option to see and hear each other without leaving
the e-mail application.
The video feature will work only if all the
participants have Gmail accounts. It's supposed to be compatible with
computers running the Windows operating system or Apple Inc.'s Mac
computers.
Google, the Internet's search leader, has been
adding more bells and whistles to Gmail as part of its effort to gain ground
on the longtime leaders in free e-mail, Yahoo Inc. and Microsoft Corp.
Video chatting has long been available through the
instant messaging services offered by Yahoo and Microsoft, but the feature
isn't available in their free e-mail applications.
Although Mountain View, Calif.-based Google has
been making strides since it began welcoming all comers to Gmail early last
year, it remains a distant third with nearly 113 million worldwide users
through September -- a 34 percent increase from the previous year, according
to comScore Inc.
Microsoft's e-mail services boasted 283 million
worldwide users, up 13 percent from the previous year, while Yahoo was a
close second at 274 million, an 8 percent gain, comScore said.
Ideas for online testing and other types of assessment are at
http://faculty.trinity.edu/rjensen/assess.htm#OnlineOffCampus
Also see the helpers for assessment in general at
http://faculty.trinity.edu/rjensen/assess.htm
Special considerations for detection and prevention of online cheating ---
http://faculty.trinity.edu/rjensen/assess.htm#OnsiteVersusOnline
Also see helpers for detection and prevention of cheating in general at
http://faculty.trinity.edu/rjensen/plagiarism.htm
"Ways to prevent cheating on online exams," by Gail E. Krovitz,
eCollege Newsletter, Vol 8, Issue 6 November 15, 2007 ---
http://www.ecollege.com/Educators_Voice.learn
- Write
every exam as if it is open book. As much as we try to
convince ourselves otherwise, we need to assume that students
use resources on their exams (the book, Internet search engines
and so on) and write our exams accordingly. Are all of our
questions asking for information that can be gathered quickly
from the textbook or from a simple Internet search? Then we
should re-think our questions (see following guideline).
Open-book exams have the potential to test higher level thinking
skills, instead of just memorizing facts. Unfortunately, scores
on open-book exams are often lower, as students don’t take exam
preparation as seriously when they know they can use their book,
so training in open-book exam-taking skills would be helpful
(Rakes).
- Write
effective multiple-choice exam questions. Because it is
so easy to use prohibited materials during online exams, it is
foolish to design tests that simply test factual information
that is easily looked up. Although it is difficult to do, online
exams are most effective when they test higher order thinking
skills (application, synthesis and evaluation) and ask questions
that cannot be answered by glancing at the book or a quick
internet search. See Christe, Dewey and Rohrer for more
information about developing quality multiple-choice questions.
- Set
tight time limits per question. Even with open book
exams (and especially for ones that are not open book), it is
important to give a tight time frame for the test, so students
will not have time to look up each question in the book. The
time limit chosen will obviously vary depending on subject
matter, type of questions asked, etc. For strict fact recall,
instructors might start by giving a total time based on allowing
60-90 seconds per question and then adjusting as necessary
based on their student body. More time would need to be given
for higher-level thinking questions or for those involving
calculations.
- Use
large question pools to offer different, randomly-selected
questions to each student. See “Tip: getting the most
out of exam question pools” for a good description of using
question pools in the eCollege system. The question pools must
be large enough to minimize overlap of questions between tests.
Rowe provides a chart comparing the average number of questions
in common for two students with different question pool sizes
and different numbers of questions drawn from the pool. For
example, 5 questions drawn from a pool of 10 questions results
in 2.5 questions in common between two students, while 5
questions drawn from a pool of 25 questions results in only 1
question in common between two students. You can consult the
mathematical formula or go with common sense: a larger question
pool is better for reducing the likelihood that students will
get the same questions.
-
Manually create different versions of the exam with the same
general question pools, but with scrambled answers for each
question. For example, in one version of the exam, the
correct answer could be B, while the answer choices are
scrambled in the other version so the correct answer is D. You
could use the Group function to assign half of the class to one
exam, and the other half the class to the other one. Cizek cites
research showing that scrambling questions and answer choices
does reduce cheating, while simply changing the order of the
same questions does not reduce cheating. In fact, in a study of
student’s perceived effectiveness of cheating prevention
strategies, having scrambled test forms was the number one
factor perceived by students to prevent cheating (Cizek).
- Assign
a greater number of smaller tests instead of one or two large
ones. This reduces the incentive to cheat, as each test
isn’t as likely to make or break a student’s grade; the pressure
of the midterm and final-only structure in some classes is a
strong incentive to cheat on those exams. Also, this increases
the logistical difficulties of cheating if a student is relying
on someone else to help them or to take the test for them.
- Provide
a clear policy for what happens if students cheat… and enforce
it! There are many important things instructors can do
from this perspective, such as discussing what constitutes
cheating, the importance of academic honesty, any honor codes in
place, what measures will be in place to prevent and detect
cheating and the punishments for cheating. If students perceive
that the instructor does not care about cheating, then incidents
of both spontaneous and planned cheating increase (Cizek).
Students know that most cheaters don’t get caught and that
punishments aren’t harsh for those who do get caught (Kleiner
and Lord). Research has found that punishment for cheating is
one of the main deterrents to cheating (Kleiner and Lord).
- Set the
exam Gradebook Review Date for after the exam has closed.
The Gradebook Review Date is when the students can access their
graded exam in the Gradebook. If this date is set before the end
of the exam, students who take the exam early could access their
exam in the Gradebook (and usually the correct answers as well)
and distribute the questions to students who would take the exam
later.
- Revise
tests every term. Sooner or later exam questions are
likely to get out into the student world and get distributed
between students. This is especially possible when students view
their graded exams in the Gradebook, as they have all the time
in the world to copy or print their questions (usually with the
correct answers provided). Periodic changes to the test bank can
help minimize the impact of this. Minor changes such as
rewording the questions and changing the order of answers
(especially if different versions with scrambled answers are not
used) can help extend the useful life of a test bank.
- Use
ExamGuardTM if the feature is available at
your school. ExamGuard prohibits the following actions while
students are taking online exams: printing, copying and pasting
anything into or from the assessment, surfing the Web, opening
or using other applications, using Windows system keys functions
or clicking on any other area within the course. Also note that
ExamGuard prohibits students from printing or copying exam
materials while viewing the exam in the Gradebook. If you are
interested in learning more about ExamGuard, please contact your
Account Executive or Client Services Consultant.
- Give
proctored exams in a traditional classroom. While this
is not an option for many online courses, it is a route that
some schools take, especially if they largely serve a local
population. With proctored exams, instructors feel more in
control of the testing environment and more able to combat
cheating in a familiar classroom setting (or at least to have
cheating levels on par with those seen in a traditional exam
setting). In a study on cheating in math or fact-based courses,
Trenholm concludes that proctoring is “the single greatest tool
we presently have to uphold the integrity of the educational
process in instruction in online MFB (math or fact based)
courses” (p. 297). Also, Cizek showed that attentive proctoring
reduced cheating directly and by giving the impression that
academic integrity is valued.
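The arithmetic behind the "large question pools" tip above (Rowe's chart) is easy to reproduce: when two students each draw q questions at random from a pool of P, each of one student's q questions shows up in the other's draw with probability q/P, so the expected overlap is q²/P. A minimal sketch (the function names are mine, not part of the eCollege system):

```python
import random

def expected_overlap(pool_size, drawn):
    # Each of student A's `drawn` questions appears in student B's random
    # draw with probability drawn / pool_size, so the expected number of
    # shared questions is drawn**2 / pool_size.
    return drawn ** 2 / pool_size

def simulated_overlap(pool_size, drawn, trials=20000):
    # Monte Carlo check: draw two exams at random and count shared questions.
    pool = range(pool_size)
    total = 0
    for _ in range(trials):
        a = set(random.sample(pool, drawn))
        b = set(random.sample(pool, drawn))
        total += len(a & b)
    return total / trials

# The two cases cited from Rowe's chart:
print(expected_overlap(10, 5))   # 2.5 questions in common
print(expected_overlap(25, 5))   # 1.0 question in common
```

The simulation converges on the same figures, which matches the common-sense conclusion quoted above: the larger the pool relative to the draw, the fewer questions two students share.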
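The "manually create different versions" tip above can also be automated. The sketch below (assumed helper names, not a feature of the eCollege system) scrambles the answer choices of a question for each exam version and reports which letter the correct answer lands on:

```python
import random

def make_version(choices, correct, seed):
    # Shuffle the answer choices for one exam version; seeding the shuffle
    # makes each form reproducible so its answer key can be regenerated.
    rng = random.Random(seed)
    shuffled = choices[:]
    rng.shuffle(shuffled)
    return shuffled, "ABCDE"[shuffled.index(correct)]

choices = ["an asset", "a liability", "equity", "revenue"]
form_a, key_a = make_version(choices, "a liability", seed=1)
form_b, key_b = make_version(choices, "a liability", seed=2)
# Both forms ask the same question, but the correct letter moves whenever
# the two shuffles place the right answer in different slots.
```

Per Cizek's finding quoted above, scrambling the answer choices (not merely reordering whole questions) is what actually reduces copying.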
Bob Jensen's threads on cheating are at
http://faculty.trinity.edu/rjensen/Plagiarism.htm
From Syllabus News on December 9, 2003
MIT Sloan Professor: Use Tech to Reinvent Business
Processes
Many private companies are using technology to keep
down their labor costs, but the key to sustained growth and revived employment
lies in whether they will successfully use technology to redesign the basic
way they operate, says MIT Sloan Prof. Erik Brynjolfsson, director of the
Center for eBusiness at MIT Sloan.
In his research, Brynjolfsson found widely different
outcomes among companies that spent similar amounts on technology, the
difference being in what managers did once the new tech was in place.
"Some companies only go part way," said Brynjolfsson, an expert on
information technologies and productivity. "They use technology to
automate this function or to eliminate that job. But the most productive and
highly valued companies do more than just take the hardware out of the box.
They use IT to reinvent their business processes from top to bottom. Managers
who sit back and assume that gains will come from technology alone are setting
themselves up for failure."
Bob Jensen's related threads are at the following URLs:
Management and costs --- http://faculty.trinity.edu/rjensen/distcost.htm
May 5, 2005 message from Carolyn Kotlas
[kotlas@email.unc.edu]
NEW E-JOURNAL ON LEARNING AND EVALUATION
STUDIES IN LEARNING, EVALUATION, INNOVATION AND
DEVELOPMENT is a new peer-reviewed electronic journal that "supports
emerging scholars and the development of evidence-based practice and that
publishes research and scholarship about teaching and learning in formal,
semi-formal and informal educational settings and sites." Papers in the
current issue include:
"Can Students Improve Performance by Clicking More?
Engaging Students Through Online Delivery" by Jenny Kofoed
"Managing Learner Interactivity: A Precursor to
Knowledge Exchange" by Ken Purnell, Jim Callan, Greg Whymark and Anna
Gralton
"Online Learning Predicates Teamwork: Collaboration
Underscores Student Engagement" by Greg Whymark, Jim Callan and Ken Purnell
Studies in Learning, Evaluation, Innovation and
Development [ISSN 1832-2050] will be published at least once a year by the
LEID (Learning, Evaluation, Innovation and Development) Centre, Division of
Teaching and Learning Services, Central Queensland University, Rockhampton,
Queensland 4702 Australia. For more information contact: Patrick Danaher,
tel: +61-7-49306417; email: p.danaher@cqu.edu.au. Current and back issues
are available at
http://www.sleid.cqu.edu.au/index.php .
Important Distance
Education Site
The Sloan Consortium --- http://www.aln.org/
The purpose of the Sloan Consortium (Sloan-C) is to help learning organizations
continually improve quality, scale, and breadth according to their own
distinctive missions, so that education will become a part of everyday life,
accessible and affordable for anyone, anywhere, at any time, in a wide variety
of disciplines.
Salem-Keizer Online, or S.K.O., is one in a growing number of public, private
and charter schools available to kids who are looking for an alternative to a
traditional education. Commonly called ''virtual school,'' it's a way of
attending school at home without the hovering claustrophobia of home-schooling.
"School Away From School," by Emily White, The New York Times,
December 7, 2003 --- http://www.nytimes.com/2003/12/07/magazine/07CYBER.html
Virtual school seems like an ideal choice for kids
who don't fit in or can't cope. ''I'm a nervous, strung-out sort of person,''
says Erin Bryan, who attends the online Oregon-based CoolSchool. Erin used to
attend public school in Hood River, Ore., but ''I didn't like the
environment,'' she says. ''I am afraid of public speaking, and I would get
really freaked out in the mornings.''
Kyle Drew, 16, a junior at S.K.O., says: ''I couldn't
get it together. I was skipping more and more classes, until I was afraid to
go to school.'' Leavitt Wells, 13, from Las Vegas, was an ostracized girl with
revenge on her mind. ''The other kids didn't want anything to do with me,''
she says. ''I'd put exploded gel pens in their drawers.'' Now she attends the
Las Vegas Odyssey Charter School online during the day, and when her
adrenaline starts pumping, she charges out into the backyard and jumps on the
trampoline.
On S.K.O.'s Web site, students can enter a classroom
without being noticed by their classmates by clicking the ''make yourself
invisible'' icon -- a good description of what these kids are actually doing.
Before the Internet, they would have had little choice but to muddle through.
Now they have disappeared from the school building altogether, a new breed of
outsider, loners for the wired age.
Douglas Koch is only 12, but he is already a
high-school sophomore. He says that he hopes to graduate by the time he's 15.
Today he sits at his computer in his Phoenix living room -- high ceilings and
white walls, a sudden hard rain stirring up a desire to look out the shuttered
windows. Douglas's 10-year-old brother, Gregory, is stationed across the room
from him -- he is also a grade-jumper. The Koch brothers have been students at
the private Christa McAuliffe Academy, an online school, for more than a year
now. While S.K.O. is a public school, C.M.A. is private, charging $250 a month
and reaching kids from all over the country. From Yakima, Wash., it serves 325
students, most of whom attend classes year-round, and employs 27 teachers and
other staff members.
The first section of this article is not quoted here.
For those of you who think distance
education is going downhill, think again. The number of students switching
from traditional brick-and-mortar classrooms to full-time virtual schools in
Colorado has soared over the past five years…
"Online Ed Puts Schools in a
Bind: Districts Lose Students, Funding," by Karen Rouse, Denver
Post, December 2, 2004 --- http://www.denverpost.com/Stories/0,1413,36%257E53%257E2522702,00.html
The number of
students switching from traditional brick-and-mortar classrooms to full-time
virtual schools in Colorado has soared over the past five years.
During the 2000-01
school year, the state spent $1.08 million to educate 166 full-time
cyberschool students, according to the Colorado Department of Education. This
year, the state projects spending $23.9 million to educate 4,237 students in
kindergarten through 12th grade, state figures show.
And those figures -
which do not include students who are taking one or two online courses to
supplement their classroom education - are making officials in the state's
smallest districts jittery.
Students who leave
physical public schools for online schools take their share of state funding
with them.
"If I lose two
kids, that's $20,000 walking out the door," said Dave Grosche,
superintendent of the Edison 54JT School District.
Continued in the article
December 3, 2004 Reply from Steve Doster
[sdoster@SHAWNEE.EDU]
Are there any internal controls that would discourage
an unethical distance learning student from simply hiring another to complete
his distance learning assignments and essentially buying his grade?
Steve
December 3, 2004 reply from Amy Dunbar [Amy.Dunbar@BUSINESS.UCONN.EDU]
In the graduate accounting distance learning classes
at UConn, the students work in groups in chat rooms. Students are graded on
participation in these groups (by the other students in my classes). They meet
each other in a one-week in-residence session at the beginning of the MSA
program. If a student hired another student in his/her place, that
impersonator would have to follow through on group work, which isn’t likely.
I taught 76 students this past summer, and perhaps I am naïve, but I would be
surprised if I had any impersonators. Working with the students through
instant messenger and in chat rooms really creates strong relationships, and I
think I could detect impersonators quickly. In fact, a sibling of a student
logged on using his brother’s AIM login, and after two sentences, I asked
who was on the other end. The brother admitted who he was. It’s harder to
fake than you might think. All that said, I really am not all that concerned
about online cheating. These courses are expensive, and if a student really
wants to cheat, s/he can do it, whether the course is FTF or distance. I do
not see myself as a monitor for graduate students. My attitude would be much
different for undergrads, but I think that grads are far more goal oriented,
and cheating is less of a concern.
December 3, 2004 reply from Bruce Lubich
[blubich@UMUC.EDU]
I would echo what Amy has said. At University of
Maryland University College, our online courses are taught in asynchronous
mode. It doesn't take long to learn the students' communication styles. When
that changes, it stands out like a sore thumb. Of course, there are times when
a student will submit someone else's work. I've had other students turn those
students in. Whether I catch them or a student turns them in, it's handled
very quickly and strictly. Students know the implications for cheating are
very harsh. Having said all that, the other element is the students
themselves. We deal with adult graduate students who have work experience and
goals in mind. They are smart enough to know that they only cheat themselves
from learning and reaching their objectives when they cheat. Does that sound
ideal and naive? Maybe. But I've had many students say that to me. Mature
students are not stupid.
I would also point out that when comparing 20 years
of teaching in f2f classrooms, I have not experienced an increase in cheating.
Let's face it. Students who want to cheat will find a way. Does it really
matter whether they're online if all they have to do is use their camera phone
to send a picture of the test answers to someone on the other side of the
room?
I understand the skepticism and concern about
cheating in the online environment. But as more and more of you move into that
environment, you'll discover that the concern is no more than what exists in
the f2f environment.
December 3, 2004 reply from David Fordham, James Madison University [fordhadr@JMU.EDU]
Steve,
Depends on how you define distance education.
At JMU's on-line MBA infosec program, we require an
in-person meeting at the beginning and again at the end of each course.
Everyone has to fly into Washington Dulles and meet at the Sheraton in Herndon
every 8 weeks during the 2-year program. Friday afternoon and Saturday is the
wrap-up of the previous course, and Saturday evening and Sunday is the start
of the new course.
In between the in-person meetings, students meet
weekly or twice-weekly on-line (synchronous) using Centra Symposium,
supplemented by Blackboard-based resources, plus Tegrity recorded lectures and
presentations.
During the very first in-person meeting, we take
pictures of every student, mainly to help the professors put a face with the
name before the courses begin. During the Saturday-afternoon-Sunday meeting
at the start of a course, the instructor gets to know the students personally,
putting faces with names and voices. Then, for the following eight weeks while
on-line, the professor has a pretty good handle of who he's interacting with.
I believe it would be fairly easy for me to spot a
phony on-line, not only by voice but also by attitudes, approaches, beliefs,
experiences, and backgrounds. Our program is very interactive in real time,
requires significant group work, and other inter-personal activities.
Then, at the end of the eight weeks, the students get
back together for a Friday-afteroon-Saturday morning session with the
professor for the final examination, case presentations, etc. Again, I would
be able to easily recognize someone outside the class based on my 45 hours of
interaction with them over the previous 8 weeks. It would be obvious if a
student's level of knowledge and understanding, energy, motivation, attitudes,
opinions, reasoning and logic etc. were atypical of that student's experience
with me in class.
So in our case, the in-person meeting requirement
every 8 weeks serves, we believe, as sufficient internal control to prevent
the substitution from going undetected.
I'm interested in other experiences and opinions.
David Fordham
James Madison University
December 3, 2004 reply from Barbara Scofield
[scofield@GSM.UDALLAS.EDU]
As a member of the UT System MBA Online Academic
Affairs Committee from 1998-2004, I watched new online faculty and instructors
deal with the issue of how you know who is doing the work over and over
again as new classes were added and board members rotated. The program was
explicitly set up to require no synchronous communications and no proctored
exams. (As the courses developed, at least one course did come to require
synchronous communication, but students were given wide latitude to schedule
their hearings in business law -- and the instructor grew to regret his choice
of methodology as the enrollment increased.)
The control for unethical online students is
basically that it is too much work if the online class includes regular
interactions with both the instructor and other students. If an online
instructor has regular interactions with his or her students, then the
instructor has the usual information to evaluate whether a particular paper or
test answer is written by the student or by a proxy. Some online students
complain about "busy work" that involves reading, researching, and
responding to narrative materials online as part of the "lecture"
component of a class -- and online faculty find it time consuming to provide
such interactivity with course content. But in my mind this type of material
in an online course is the very "control" you are asking about.
Barbara W. Scofield, PhD, CPA
Associate Professor of Accounting
University of Dallas |1845 E. Northgate
Irving, TX 75062
December 3, 2004 reply from Chuck Pier [texcap@HOTMAIL.COM]
Barbara I think your explanation of the controls is
exactly what I have experienced. I have not taken an online course, or even
taught one, but my wife completed her entire MS in Library Science online
through North Texas. My observations from watching her were that the amount of
work and asynchronous communication required were significant. The course
required extensive reading and would be expensive to pay someone else to do
the work for the student, although I am sure that it has been done, and will
be done in the future. I know that my wife worked a lot more in this online
environment than she did in the traditional classroom, and I felt that most of
the work was an attempt to validate the lack of traditional testing, even in
the online format.
This might also explain Laurie's comment about the
virtual experience being more satisfying than the traditional courses. Based
on my wife's experience and Barbara's comments I would think that the amount
of work also creates a sense of "ownership" in an online student's
experience.
However, based on the amount of work required, I know
that these programs are not for everyone. You have to be mature and dedicated
to put in the time required to be successful. Based on what I see in my
classroom, I am not worried about on-line education supplanting me or my
colleagues anytime in the future.
Chuck
December 3, 2004 reply from Patricia Doherty [pdoherty@BU.EDU]
I co-teach in a distance-learning program for Seton
Hall, and echo what others have said. We have threaded discussions of cases
online, and the students are also members of teams, with a separate thread for
each team to discuss the week's written (team) assignment. They really do have
"online personalities," and those are revealed to everyone in the
class, after the first week of these dual discussions, not just to the
instructors, but to the other students, so I think an imposter, unless they
actually did the course from start to finish "as someone," would
quickly be noticed.
We see the thought process they go through as they
formulate assignments - they even upload preliminary work as they progress.
So, a final version completely different from the preliminary would, again, be
noticed. And each team works on the assignment together, with one person -
sometimes a different person each week - delegated to submit the final
version. Again, that's hard to cheat on. The final assignment is individual,
and I think we'd notice immediately if the work were very different from what
we have seen of a person for an entire course. That said, anyone motivated
enough to cheat could find a way. The question is whether we want to waste our
time devising ever more complicated schemes to thwart each new cheating plan,
making the courses less pleasant for the students who don't cheat, as well as
for the teachers, or whether we prefer to spend the time making the course as
rich and productive and useful, and as close to a face-to-face experience, as
we can.
p
December 3, 2004 reply from Charlie Betts [cbetts@COLLEGE.DTCC.EDU]
Hi Steve,
I doubt that there are any 100% controls to prevent
cheating in online courses, just as there are no 100% accounting controls to
prevent fraud through collusion, but there are controls that can at least
minimize the possibility that cheating will occur.
I agree with the comments of Amy and the other
respondents to your question, and I would feel comfortable with what they are
doing in their courses if I were teaching those graduate level courses. But
I'm a teacher in a community college and one of the online courses that I
teach on a regular basis is the first principles course. Over fifty percent of
my students in a typical class are not accounting majors and are taking the
course only because it's a requirement for graduation in their major. There's
also usually a small percentage of students from other colleges and
universities in the classes although for the summer session this percentage is
often quite large. Given those circumstances, I feel that I have to have more
safeguards in place to ensure that the work I receive from students is their
own.
The primary control that I use is a requirement that
three of the six tests in the course must be proctored. This is not a problem
with our own students since each of my college's (Delaware Tech) four campuses
has a testing center that is open in the evenings and on weekends. All my
tests are online, but I "password protect" the proctored tests. For
each proctored test, I email each testing center a list of the students who
will be taking the test, the password, and any special testing instructions.
The testing centers check each student's picture ID before the student is
admitted to the testing center.
Part of each student's grade is a project somewhat
similar to a traditional practice set, which I have modified so that it can be
completed on Excel worksheets, which I provide. When the student has completed
this work, I require them to take what I call an "audit" test on
their work. This is a short test that asks them simply to look up certain
figures from their completed work and to repeat certain calculations they had
to make. This audit test must also be proctored. The audit test is a simple
test for someone who has done their own work, but would be very difficult for
someone to pass who had "hired" someone to do their work for them.
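Charlie's "audit test" idea can be sketched in a few lines: draw a couple of figures at random from the student's own completed practice set and ask the student to look them up. The figure labels, amounts, and function below are hypothetical illustrations, not his actual materials.

```python
import random

# Hypothetical example of one student's completed practice-set figures,
# keyed by labels the student can look up on their own Excel worksheets.
completed_figures = {
    "Trial balance total": 48250.00,
    "Net income": 6120.00,
    "Ending retained earnings": 14500.00,
    "Total current assets": 22300.00,
}

def draw_audit_questions(figures, k=2, seed=None):
    """Pick k figures at random from the student's own work.

    A student who actually did the work can answer instantly; someone
    who "hired" the work out would struggle under proctored conditions.
    """
    rng = random.Random(seed)
    labels = rng.sample(sorted(figures), k)
    return [(label, figures[label]) for label in labels]
```

Because the questions are drawn from each student's own submitted numbers, every student effectively gets a different audit test.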
For students who are unable to take the proctored
tests at one of our testing centers, I require them to provide a proctor whom
I must approve. Since most schools have testing centers of some sort this is
usually done through their school's testing center. Other proctors that
students have provided have included professors at their schools, school
librarians, ministers, local CPAs, etc. For one student who started the course
as a local
student and finished it on temporary duty in Iraq, the proctor was the
student's company commander. The student is responsible for providing the
proctor and the proctor must establish their identity in some way, usually by
a letter to me on their school/company letterhead.
I've compared the scores from both the proctored and
unproctored tests in my online courses with the scores of identical tests
given in face-to-face courses and there is no significant difference, although
the proctored online scores do tend to be slightly higher, a difference I
attribute to the slightly better quality of student I find in the online
classes.
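Charlie's comparison of proctored and unproctored scores against face-to-face scores amounts to a two-sample significance test. A minimal sketch of Welch's t statistic (which does not assume equal variances) is below; the sample scores in the test are invented, not his data.

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic and approximate degrees of freedom
    for two independent samples of exam scores."""
    mean_a, mean_b = statistics.mean(sample_a), statistics.mean(sample_b)
    var_a, var_b = statistics.variance(sample_a), statistics.variance(sample_b)
    n_a, n_b = len(sample_a), len(sample_b)
    se2 = var_a / n_a + var_b / n_b          # squared standard error of the difference
    t = (mean_a - mean_b) / math.sqrt(se2)
    # Welch-Satterthwaite approximation for degrees of freedom
    df = se2 ** 2 / ((var_a / n_a) ** 2 / (n_a - 1)
                     + (var_b / n_b) ** 2 / (n_b - 1))
    return t, df
```

Comparing |t| against the t distribution with df degrees of freedom gives the "no significant difference" conclusion Charlie describes when |t| is small.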
I know this seems like a cumbersome system - I
sometimes think it is myself - but for a beginning principles course I feel
that these or similar safeguards are necessary, and in practice it really
works much more smoothly than it would seem from my description.
I've really only had one problem and that occurred
last summer. It involved a student at a university in a neighboring state,
which I won't name because I hold the university in much higher regard than I
do this particular student. After numerous emails which complained in a highly
ungrammatical manner that the proctored tests were unfair and gave innumerable
reasons why he should be exempt from this requirement, all of which were
naturally rejected, I received an email from someone purporting to be an
employee in the school's library and offering to be a proctor for that
student's test. Since the email was written in the same ungrammatical style as
the student's prior emails, I didn't have to possess the acumen of a Sherlock
Holmes to be suspicious. But just to be sure I went to the school's web site,
located the name and phone number of the librarian, and called her to
"verify" the prospective proctor's employment. It was not really a
surprise that the librarian had never heard of her "employee." I
then emailed the "proctor" to express my surprise that the librarian
had no idea who the "proctor" was. This email was shortly followed
by another email from the student informing me that he was dropping the
course. So even this tale had a happy ending.
Charlie Betts
-----------------------------------------------------------
It's not so much what folks don't know that causes problems. It's what they do
know that ain't so. - Artemus Ward
Charles M. Betts DTCC,
Terry Campus
100 Campus Drive Dover DE 19904
cbetts@college.dtcc.edu
Bob Jensen's threads on distance
education are at http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
November 1, 2003 message from Douglas Ziegenfuss [dziegenf@ODU.EDU]
The GAO published a report "Measuring
Performance and Demonstrating Results of Information Technology
Investments" publication # GAO/AIMD-98-89.
You can retrieve this report from the GAO website at www.gao.gov
and look under reports. Hope this helps.
Douglas E. Ziegenfuss
Professor and Chair,
Department of Accounting
Room 2157 Constant Hall
Old Dominion University Norfolk, Virginia 23529-0229
Distance Education: The Great Debate
From Infobits on March 1, 2002
EVALUATION STRATEGIES FOR DISTANCE
EDUCATION
"The many
factors involved in the success of distance offerings makes the creation of a
comprehensive evaluation plan a complex and daunting task. Unfortunately, what
may seem the most logical approach to determining effectiveness is often
theoretically unsound. For example, comparing student achievement between
distance and face-to-face courses may seem a simple solution, yet the design
is flawed for a number of reasons. However, theoretically sound approaches do
exist for determining the effectiveness of learning systems, along with many
different methods for obtaining answers to the relevant questions." In
"Measuring Success: Evaluation Strategies for Distance Education" (EDUCAUSE
QUARTERLY, vol. 25, no. 1, 2002, pp. 20-26), Virginia Tech faculty Barbara
Lockee, Mike Moore, and John Burton explain the factors to consider when
evaluating distance education (DE) programs. Sharing the experience gained
from DE evaluations at Virginia Tech, they provide guidance to readers who
want to set up evaluation plans at their institutions. The article is
available online (in PDF format) at http://www.educause.edu/ir/library/pdf/eqm0213.pdf
The link to the Lockee et al. paper is at http://www.educause.edu/ir/library/pdf/eqm0213.pdf
Bob Jensen's threads on assessment
are at http://faculty.trinity.edu/rjensen/assess.htm
From EDUCAUSE at http://www.educause.edu/
ACE-EDUCAUSE
distance learning monograph published
The American Council on Education (ACE) and EDUCAUSE have just published
the second monograph in a series on distributed education. Maintaining
the Delicate Balance: Distance Learning, Higher Education Accreditation,
and the Politics of Self-Regulation, by Judith S. Eaton, President
of the Commission for Higher Education Accreditation, can be accessed in
PDF format or purchased from ACE. http://www.educause.edu/asp/doclib/abstract.asp?ID=EAF1002
Abstract
Maintaining the Delicate Balance: Distance Learning, Higher Education
Accreditation, and the Politics of Self-Regulation is the second
monograph in a series of papers on distributed education commissioned by
the American Council on Education (ACE) and EDUCAUSE. It describes the
impact of distance learning on the balance among accreditation (to
assure quality in higher education), institutional self-regulation, and
the availability of federal money to colleges and universities. The
paper confronts the challenges of protecting students and the public
from poor-quality higher education, and attending to quality in an
increasingly internationalized higher education marketplace.
View
HEBCA proof-of-concept video
Visit the EDUCAUSE Information Resources Library to view the video that
was shown at a recent demonstration of the Higher Education Bridge
Certification Authority (HEBCA), the Federal Bridge, and the Public Key
Interoperability project. Read the press
release describing the proof-of-concept event.
NSF
releases latest HPNC announcement
In a recently released High Performance Network Connections for Science
and Engineering Research (HPNC) announcement, the NSF encourages U.S.
institutions of higher education and institutions with significant
research and education missions to establish high-performance (at or
above 45 megabits per second) Internet connections where necessary to
facilitate cutting edge science and engineering research. View the announcement
and instructions for proposal submission.
Hi Kevin,
Thank you for the message below.
My concern with John Sanford's report is that critics of distance education
often have never tried it. Or even if they have tried it, they have never
tried it with the instant message intensity of an Amy Dunbar --- http://faculty.trinity.edu/rjensen/book01q3.htm#Dunbar
I just do not think the armchair
critics really appreciate how the Dunbar-type instant messaging pedagogy can get
inside the heads of students online.
But I think it is safe to say that the
Sanford-type critics will never have the motivation and enthusiasm to carry off
the Dunbar-type instant messaging pedagogy. For them and many of us
(actually I'm almost certain that I could not pull off what Dr. Dunbar
accomplishes), it is perhaps more "suicidal" for students.
I also think that success of distance
education depends heavily upon subject matter as well as instructor
enthusiasm. But I think there is only a small subset of courses that
cannot be carried off well online by a professor as motivated as Dr. Dunbar.
I am truly grateful that I was able to
persuade Professor Dunbar and a distance education expert from Duke
University to present an all-day workshop in the Marriott Rivercenter Hotel on
August 13, 2002. If our workshop proposal is accepted by the AAA, this is
an open invitation to attend. Details will soon be available under "CPE"
at http://accounting.rutgers.edu/raw/aaa/2002annual/meetinginfo.htm
I wish John Sanford would be there to watch the show.
Thanks for helping me stay
informed! Other views on the dark side are summarized at http://faculty.trinity.edu/rjensen/000aaa/theworry.htm
Bob Jensen
Bob,
Since I know you track information technology WRT education, I thought you might
be interested in this. The original source is the "Stanford Report"
cited below: TP is a listserv that redistributed it.
Kevin
Folks:
The article below
presents an interesting take on the limitations of technology, teaching, and
learning. It is from the Stanford Report, February 11, 2002 http://www.stanford.edu/dept/news/report/
. Reprinted with permission.
Regards,
Rick Reis reis@stanford.edu
UP NEXT: Book Proposal Guidelines
HIGH-TECH TEACHING
COULD BE "SUICIDAL"
BY JOHN SANFORD
University educators
largely extol the wonders of teaching through technology. But skeptics
question whether something is lost when professors and lecturers rely too
heavily on electronic media, or when interaction with students takes place
remotely -- in cyberspace rather than the real space of the classroom.
Hans Ulrich Gumbrecht,
the Albert Guerard Professor of Literature, is one such skeptic. "I think
this enthusiastic and sometimes naïve and sometimes blind pushing toward the
more technology the better, the more websites the better teacher and so forth,
is very dangerous -- [that it] is, indeed, suicidal," Gumbrecht said,
speaking at the Jan. 31 installment of the Center for Teaching and Learning's
"Award-Winning Teachers on Teaching" series.
But Gumbrecht
cautioned that there are few, if any, studies either supporting or rejecting
the hypothesis that traditional pedagogy is superior to teaching via the
Internet or with a host of high-tech classroom aids. "If [such studies]
exist, I think we need more of them," he said.
He added that he
could point only to his "intuition that real classroom presence should be
maintained and is very, very important," and emphasized the need for
educators to critically examine where technology serves a useful pedagogical
function and where it doesn't.
However, Gumbrecht
allowed that, for courses in which knowledge transmission is the sole purpose,
electronic media probably can do the job well enough. Indeed, given the 20th
century's knowledge explosion and the increasing costs of higher education,
using technology as opposed to real-life teachers for the transmission of
information is probably inevitable, he said.
In any case,
knowledge transmission should not be the core function of the university, he
added, noting that the Prussian statesman and university founder Wilhelm von
Humboldt, sociologist Max Weber and Cardinal John Henry Newman all held that
universities should be places where people confront "open
questions."
"Humboldt even
goes so far to say -- and I full-heartedly agree with him -- they should
ideally be questions without a possible answer," Gumbrecht said. He
asserted the university should be a place for "intellectual
complexification" and "riskful thinking."
"We are not
about finding or transmitting solutions; we are not about recipes; we are not
about making intellectual life easy," he continued. "Confrontation
with complexity is what expands your mind. It is something like intellectual
gymnastics. And this is what makes you a viable member of the society."
Paradoxically,
"virtual" teacher-student interaction that draws out this kind of
thinking probably would be much costlier for the university than real-time,
in-class teaching, Gumbrecht said. The reason for this, he suggested, is that
responding to e-mail from students and monitoring their discussion online
would require more time -- time for which the university would have to pay the
teacher -- than simply meeting with the students as a group once or twice a
week.
In addition,
Gumbrecht asserted that discussions in the physical presence of others can
lead to intellectual innovation. He recalled a Heidegger conference he
attended at Stanford about a year ago, where he said he participated in some
of the best academic discussions of his career. Heidegger himself "tries
to de-emphasize thinking as something we, as subjects, perform,"
Gumbrecht said. "He says thinking is having the composure of letting
thought fall into place." Gumbrecht suggested something similar happens
during live, in-person discussions.
"There's a
qualitative change, and you don't quite know how it happens," he said.
"Discussions in the physical presence have the capacity of being the
catalyst for such intellectual breakthroughs. The possibility of in-classroom
teaching -- of letting something happen which cannot happen if you teach by
the transmission of information -- is a strength."
Gumbrecht argued that
the way in which students react to the physical presence of one another in the
classroom, as well as to the physical presence of their professor, can
invigorate in-class discussions. "I know this is problematic territory,
but I think both the positive and negative feelings can set free additional
energy," he said. "I'm not saying the physical presence makes you
intellectually better, but it produces certain energy which is good for
intellectual production."
Asked to comment on
some of the ideas Gumbrecht discussed in his lecture, Decker Walker, a
professor of education who studies technology in teaching and learning, agreed
that pedagogy via electronic media may work best in cases where information
transmission is the goal -- for example, in a calculus course. In areas such
as the humanities and arts, it may be a less valuable tool, he said.
In any case, the
physical presence of teachers can serve to motivate students, Walker said.
"I think young people are inspired more often by seeing other people who
are older -- or even the same age -- who do remarkable things," he said.
"It would be hard to replace this with a computer."
On the other hand,
Walker maintained that computer technology can be a useful educational aid.
One such benefit is access to scholars who are far away. "Technology can
enable a conversation, albeit an attenuated online one, with distant experts
who bring unique educational benefits, such as an expert on current research
on a fast-moving scientific topic," Walker said. "This may greatly
enrich a live class discussion with a local professor."
Walker maintained
that the university environment is not in danger of being supplanted by
technology. On the contrary, he noted, large businesses have adopted aspects
of the university environment for their employees' professional education. For
example, General Motors started GM University, whose main campus is at the
company's new global headquarters in Detroit's Renaissance Center.
Museums also function
in some ways like universities, he noted. For example, the Smithsonian
Institution has numerous research, museum and zoo education departments.
And for all the
emphasis high-tech companies put on developing devices and software for remote
communication, many have had large campuses constructed where workers are
centralized -- a nod, perhaps, to the importance of person-to-person
interaction.
Rick Reis, executive
director of Stanford's Alliance for Innovative Manufacturing and associate
director of the Learning Lab's Global Learning Partnerships, noted that the
subject of technology in education covers a lot of territory. Few people, for
example, are likely to argue that making students trudge over to the library's
reserve desk to get a piece of reading material for a course, or making
hundreds of hard copies, is preferable to posting it on the web, Reis said.
But he added that whether the kind of teaching generally reserved for a
seminar could be as effective online is an open question.
Reply from Amy Dunbar [ADunbar@SBA.UCONN.EDU]
George,
you wondered about the following Sanford statement:
> "Paradoxically, 'virtual' teacher-student interaction that draws out this
> kind of thinking probably would be much costlier for the university than
> real-time, in-class teaching ... responding to e-mail from students and
> monitoring their discussion online would require more time -- time for which
> the university would have to pay the teacher -- than simply meeting with the
> students as a group once or twice a week."
Although I probably do spend more time
"teaching" now that I am online (I teach two graduate accounting
courses: advanced tax topics and tax research), I think the more important
issue for me is "when," not "how much." My students work
full time. They are available at night and on weekends, and they prefer to do
coursework on weekends. Thus, I spend a lot of time at home in front of my
computer with my instant messenger program open. If a student wants to talk,
I'm available during pre-determined times. For a compressed six-week summer
session with two classes and around 60 students, I live online at night and on
weekends. With a regular semester online class, I base my online hours on a
class survey of preferences. Last fall I was online from 7 to 9 or 10 at least
two nights a week, Saturday afternoons, Sunday mornings for the early birds
(an hour or two), and then Sunday evenings from 6 to 10. Sunday evenings were
my busiest times. On the other scheduled days, I generally could do other
easily interruptible tasks while I was online. Frequently a group of students
would call me into a chat room, either on AIM or WebCT. I think that my online
presence takes the place of "the physical presence of teachers [which]
can serve to motivate students." Students log on to AIM, and they see me
online. For my part, I love logging on and seeing my students online. They are
just a click away.
Most of my online students think the burden of
learning has been shifted to them, and I'm just a "guide on the
side." And they are right. Online learning is not for everyone, but as
Patricia Doherty noted, live classroom instruction isn't an option for all
students, particularly students who travel in connection with their work. And
just as not all live classroom instruction encompasses the dynamic
interchanges described by Sanford, not all online courses will either, but I
have certainly been an observer and a participant in spirited exchanges among
students.
As for the comment that the university would have to
pay the teacher for additional time, I'm not sure such time is quantifiable
because I do other things when I am online but no one is "talking"
to me. As a tenure track prof, I'm not sure how that comment would apply in my
case in any event. Perhaps where the extra cost arises is in the area of class
size. Handling more than 30 students in an online class is difficult. Thus,
schools may have to offer more sections of online courses.
__________________________________
GO HUSKIES!!! (BEWARE OF THE DOG)
Amy Dunbar ( mailto:adunbar@sba.uconn.edu
860/486-5138 http://www.sba.uconn.edu/users/ADunbar/TAXHOME.htm
Fax 860-486-4838
University of Connecticut School of Business, Accounting Department
2100 Hillside Road, Unit 1041A Storrs, CT 06269-2041
Reply from Dan Gode, Stern School of Business [dgode@STERN.NYU.EDU]
David Noble has been one of the foremost critics of
distance learning for the last four years. He is widely quoted. I too have
found his articles (at http://communication.ucsd.edu/dl/
) interesting. While discussing them with my colleague today, I could not
avoid noticing the irony that he himself is one of the biggest beneficiaries
of the internet and distance learning.
Many of us would not have "learned" about
his views without the web. He has been able to "teach" his ideas in
the distance learning mode almost free only because of the web. In fact, most
of the critics of distance learning have achieved their fame precisely because
of the knowledge dissemination enabled by the web.
I agree that adoption of distance learning will be
much slower than the expectation of many distance learning companies and
universities but it will be foolhardy to ignore the gradual technological
innovation in education.
A select few in New York can afford the live
entertainment of Broadway, most others are grateful for the distance
entertainment that is available cheaply to them. Distance learning may not
replace classroom learning, but it will surely provide much needed low cost
education to many.
Dan Gode
Stern School of Business
New York University
Note from Bob Jensen: You can read more about David Noble at http://faculty.trinity.edu/rjensen/000aaa/theworry.htm
Reply from Bob Jensen
Hi Jagdish,
I agree with you to a point. However, I am always suspicious of academics who
see only the negative side of a controversial issue. I'm sorry, but I find David
Noble to be more of a faculty trade union spokesperson than an academic. Much of
his work reads like AAUP diatribe.
Those of you who want to read some of his stuff can go to my summary of
the dark side of distance education at http://faculty.trinity.edu/rjensen/000aaa/theworry.htm
I would have much more respect for David Noble if he tried to achieve a
little more balance in his writings.
Bob (Robert E.) Jensen Jesse H. Jones Distinguished Professor of Business
Trinity University, San Antonio, TX 78212 Voice: (210) 999-7347 Fax: (210)
999-8134 Email: rjensen@trinity.edu
http://www.trinity.edu/rjensen
-----Original Message-----
From: J. S. Gangolly [mailto:gangolly@CSC.ALBANY.EDU]
Sent: Thursday, February 28, 2002 9:37 AM
To: AECM@LISTSERV.LOYOLA.EDU
Subject: Re: The Irony of David Noble and other critics of distance learning
Dan,
Let me play the devil's advocate once again; this
time I do so with a bit of conviction.
Noble's tirade has been against the commoditisation
of instruction and the usurping of what are traditionally regarded as academic
faculty prerogatives by the administrators in their quest for revenues (or
cutting costs). These are real issues, and a knee-jerk reaction does no one
service.
Noble's arguments are based on the actual experiences
at UCLA and York. I suppose if he were to rewrite his pieces today, the list
would be much longer.
Noble's reservations are also based on the distinct
possibility of higher education turning into diploma mills (Reid's
observation: "no classrooms," "faculties are often untrained or
nonexistent," and "the officers are unethical self-seekers whose
qualifications are no better than their offerings.")
I am a great enthusiast for distance learning, but I
think the debate Noble is fostering is a very legitimate one. It will at least
sensitize us all to the perils of enronisation of higher education. Do we need
the cohorts of the likes of Lay and Skilling running the show? What guarantee
do we have that once it is commoditised, a non-academic (with or without
qualifications and appreciation for higher education) will "manage"
it?
I do very strongly feel that distance education has a
bright future, but the Noble-like debates will strengthen it in the long run.
There is a need for the development of alternative pedagogies, etc.
Back in the late 60s, I was working in a paper mill
in the middle of nowhere in India, and I started taking a course in electrical
engineering in the distance mode (we used to call it correspondence courses).
Unfortunately, in those days there was no near-universal access to computers, and
it was not easy. However, it put the burden of learning on me much more so
than in my usual higher education even at decent schools (including one of the
IIMs). Unfortunately, I had to discontinue it because of pressure of work.
I look at most existing distance learning today as
the model T of education. We need to figure out how we can improve on it, not
take it as a matter of faith.
Jagdish
Reply from Paul Williams [williamsp@COMFS1.COM.NCSU.EDU]
Jagdish's point is well taken; the issue is the commodification of higher
education (and everything else, for that matter). "Efficiency" is not the only
value humans cherish.
There is an interesting article
in the last Harper's by Nick Bromell, a professor of English at
UMass Amherst, titled Summa Cum Avaritia. Higher education produces
substantial revenues and a good deal of the discussion about
distance education is really about coopting those revenues (privatizing
education for profit).
Reply from George Lan [glan@UWINDSOR.CA]
Hi Amy,
Thanks for sharing your on-line experience with us.
It shows what flexible learning could achieve. However, those who think that
teaching on-line or a dist. ed course is a walk in the park and that on-line
courses are cash cows will probably think twice. Administrators should ensure
that the classes are not too big so that the on-line instructor can elicit the
kind of interaction and learning that you mention.
The "psychiatrist", "nurse" or
sometimes the "gladiator" in me prefers personal contact courses but
I do recognize the value of on-line and distance education courses, especially
for those to whom live classroom is not an option, as Pat and you have
mentioned.
You make a critical point when you mention that
"most of my online students think that the burden of learning has been
shifted to them, and I'm just a "guide on the side." " Having
taught some distance education courses in the past, I've noticed that the
drop-out rate seems to be higher in my dist. ed courses (I agree that I have
not used the power of technology and the computer to the fullest before) but
could some of the students find the burden of learning on their own
unbearable? In Canada, the Certified General Accountants have a high quality
on-line delivery of courses for those wishing to pursue the accounting
designation. In the big city centres, the students also have the choice of
attending lectures-- they pay some extra fee (however, all assignments are
submitted on-line, usually on a weekly basis and they are graded and returned
to the student within 7 days- there is an efficient system of markers and
tutors for each course). The onus to learn is on the student and several of
them have to repeat the same course several times (which probably is not
dependent on whether they choose to attend lectures or not). Financially and
time-wise, it can be very costly to the students. But then, as stated by the
economist Spence, education is a signal.
George Lan
Reply from Ross Stevenson [ross.stevenson@AUT.AC.NZ]
Hi (from the South
Pacific) aecmers
I have written heaps
of computer based (first year accounting) stuff that students can:
1. Use at their own pace in a teaching computer lab (my classroom), and/or
2. Use on their home computer.
When writing the stuff I had 'distance learning' in mind. However, I and most
of my students enjoy the flexible computer lab approach, during which they can:
1. Listen to me (all stuff projected on a large wall screen), or
2. Work at their own pace from their monitor.
In my mind, there is
no doubt that a majority of (first year) students prefer the classroom (dare I
say 'non-distance learning') IT approach. Some of my colleagues teach the same
course with no more technology than overhead projectors.
I am planning some
research along the following lines
At beginning of semester, each student completes:
1. An objective profile of themselves (age, gender, English as their first
language? etc.)
2. A subjective profile of themselves as to what they perceive are their
preferred learning environments (IT based? classroom? home? etc.)
At end of semester:
1. More student feedback as to how they rated my classroom-IT delivery.
PURPOSE OF
RESEARCH
To see if we can survey students at *beginning* of semester and advise them as
to which class (lecturer & delivery style) would probably suit them.
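Ross's proposed matching of survey responses to delivery styles could be sketched as a simple scoring rule: weight each self-reported preference toward each section type and recommend the highest scorer. The questions, weights, and section names below are invented for illustration, not his actual instrument.

```python
# Hypothetical weights: how strongly each reported learning preference
# points toward each delivery style. These numbers are assumptions.
PREFERENCE_WEIGHTS = {
    "IT based": {"computer-lab section": 2, "lecture section": 0},
    "classroom": {"computer-lab section": 1, "lecture section": 2},
    "home": {"computer-lab section": 2, "lecture section": 0},
}

def recommend_section(preferred_environments):
    """Tally weights for each section type and return the best match."""
    scores = {"computer-lab section": 0, "lecture section": 0}
    for env in preferred_environments:
        for section, weight in PREFERENCE_WEIGHTS.get(env, {}).items():
            scores[section] += weight
    return max(scores, key=scores.get)
```

In practice the weights would be estimated from the end-of-semester feedback Ross describes, rather than set by hand.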
I would appreciate
any references to any research similar to above you are aware of.
Regards
Ross Stevenson
Auckland Uni of Technology NZ
Reply from arul.kandasamy@indosuez.co.uk
George,
you asked: could some of the students find the >burden of learning on
their own unbearable?
IMO, online learning isn't for everyone. I suggest a
switch to the University of Hartford's live grad program when students are
dissatisfied with online learning. (UConn's MSA program is an online program.)
I have noticed that if students hang in, however, their attitude frequently
changes. By the time my students take me for my second online class, most
respond to my survey question re: online vs live preference by choosing
online. I thank Bob Jensen for his kind words in yesterday's posting, but let
there be no doubt that I have students who do not like online learning. For
example, one student in my first online class said, "This experience was
very new to me and I learned a lot, but my expectations were different b/c I
didn't know this was going to be an on-line class. I don't think I could have
gotten through this class without the help and support of you and my group
members. Above I checked that I would prefer a live classroom setting. Tax can
be confusing and I think I would understand the material better if you were
telling it to me rather than me reading it on the computer. I learn better by
hearing things than by reading them. Even though this class did not completely
support my style of learning, I still think it is one of the best classes I
have taken, mostly because of the way it is structured - group work. (And also
because it has a great teacher.)" (You didn't think I would pick a
comment that didn't say something positive about me, did you? ;-)) And "I
just think that as much as we interacted with you Dunbar, it's just that much
harder because in the end, all of your hard work making the content modules,
etc. has to be self-taught on a level that I don't think any of us are
accustomed to (or fully capable of yet)."
I am very interested in learning more about Canada's
experience with the Certified General Accountants online courses. I didn't
realize that live classes were an option. Has anyone compared outcome results
for live/online vs strictly online students?
Dunbar
Reply from Thomas C. Omer (E-mail) [tcomer@UIC.EDU]
While I haven't paid
much attention to David Noble I have paid attention to administrators whose
incentives rest on balancing the budget rather than thinking about the
educational issues that result from developing or offering online courses. It
is critical that faculty who are interested in being involved with distance
learning must show some solidarity in rejecting offers of distance learning
based on cost measures alone. We are in the business of education, after all,
not budget balancing. The extent to which administrations take advantage of
faculty members exploring new ways to educate will only reduce our educational
institutions to paper mills, a problem some might suggest is already occurring
in many settings. Think for a moment about whether the grade you assign to a
student is really within your authority. At my home institution and here at
UIUC it is not; I also lack the authority to drop or add students to a
class. While this sounds like I am whining (I probably am), it also suggests
that my control of the factors affecting the educational experience and
outcomes is slowly degrading. Adopting distance learning without explicit
contracts as to what I am allowed to do and what the administration cannot do
sets the stage for making distance learning a nightmare for me and potentially
an educational farce for students.
I think Amy's
experience has been very positive and I certainly agree that distance learning
is not for every student. Unfortunately, my first experience with developing a
curriculum based on distance learning started with a discussion of the cost
effectiveness of the approach, not the educational issues.
I now step off the
soap box,
Congratulations
Amy!!!
Thomas C. Omer
Associate Professor (Visiting) Department of Accountancy
University of Illinois at Urbana-Champaign
In the SCALE program at the University
of Illinois, where students were assigned (I don't think they could choose)
either traditional classroom sections or Asynchronous Learning sections, there
was a tendency for many students to prefer ALN sections that never met in a live
classroom. Presumably, many students prefer ALN sections even when they are
full-time students living on campus. You can read the student evaluations at
http://w3.scale.uiuc.edu/
Also see the above discussion regarding
the SCALE Program.
The Problem of Attrition in Online
MBA Programs
We expect higher attrition rates from learners in both commuting degree
programs and most online programs. The major reason is that, prior to enrolling
in a course or program, people tend to be overly optimistic about how they can
manage their time between a full-time job and family obligations. After
enrolling, unforeseen disasters arise, such as family illnesses, job
assignments out of town, car breakdowns, computer breakdowns, job loss or
change, etc.
The problem of online MBA attrition at
West Texas A&M University is discussed in "Assessing Enrollment and
Attrition Rates for the Online MBA," by Neil Terry, T.H.E. Journal,
February 2001, pp. 65-69 --- http://www.thejournal.com/magazine/vault/A3299.cfm
Enrollment and
Attrition Rates for Online Courses
Bringing education to
students via the Internet has the potential to benefit students and
significantly increase the enrollment of an institution. Student benefits
associated with Internet instruction include increased access to higher
education, flexible location, individualized attention from the instructor,
less travel, and increased time to respond to questions posed by the
instructor (Matthews 1999). The increase in educational access and convenience
to the student should benefit the enrollment of an institution by tapping the
time- and geographically-constrained learner. The results presented in Table 1
indicate that online courses are doing just that. Specifically, Internet
courses averaged higher enrollments than the campus equivalents in 12 of the
15 business courses. The online delivery had an overall average of 34 students
per course, compared to only 25 students in the traditional campus mode.
Although enrollment
is relatively high, it is also important to note that the attrition rate was
higher in 13 of the 15 online courses. Potential explanations for the higher
attrition rates include students not being able to adjust to the self-paced
approach in the virtual format, the rigor of study being more difficult than
students anticipated, and a lack of student and faculty experience with the
instruction mode. A simple sign test reveals that enrollment and attrition
rates are both statistically greater in the online format (Conover 1980).
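The sign test cited above (Conover 1980) can be sketched with a short computation. Assuming the article's stated counts (online enrollment higher in 12 of the 15 paired courses, online attrition higher in 13 of 15), the one-sided probability under a fair-coin null is:

```python
from math import comb

def sign_test_p(wins: int, n: int) -> float:
    """One-sided sign test: P(X >= wins) when X ~ Binomial(n, 0.5)."""
    return sum(comb(n, k) for k in range(wins, n + 1)) / 2 ** n

# Counts as reported in the Terry article (ties would be excluded from n).
print(round(sign_test_p(12, 15), 4))  # enrollment higher online: p = 0.0176
print(round(sign_test_p(13, 15), 4))  # attrition higher online:  p = 0.0037
```

Both p-values fall below the conventional 0.05 level, consistent with the article's claim that enrollment and attrition rates are both statistically greater in the online format.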
Table 1. Average Enrollment and Attrition Rates for Campus and Online Courses

Course Name                          Campus Enrollment (Attrition)   Online Enrollment (Attrition)
Financial Accounting                 31 (22%)                        40 (16%)
Accounting for Decision Making       43 (13%)                        45 (16%)
Contemporary Economic Theory         11 (19%)                        13 (23%)
Advanced Macroeconomic Theory        24 (15%)                        26 (19%)
International Economics              13 (2%)                         48 (3%)
Money and Capital Markets            14 (7%)                         44 (14%)
Corporate Finance                    36 (23%)                        47 (36%)
Statistical Methods in Business      10 (13%)                        14 (43%)
Quantitative Analysis in Business    33 (17%)                        22 (33%)
Computer Information Technology      40 (7%)                         38 (5%)
Managerial Marketing                 11 (9%)                         19 (24%)
Seminar in Marketing                 23 (11%)                        50 (14%)
Organizational Behavior              47 (13%)                        31 (29%)
International Management             17 (26%)                        44 (27%)
Strategic Management                 24 (8%)                         28 (7%)
Overall Average                      25 (14%)                        34 (21%)
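As a cross-check, the overall averages can be recomputed from the per-course figures (a minimal sketch; the enrollment/attrition pairs are transcribed from the rows of Table 1 above):

```python
# (enrollment, attrition %) pairs for the 15 courses, transcribed from Table 1.
campus = [(31, 22), (43, 13), (11, 19), (24, 15), (13, 2), (14, 7), (36, 23),
          (10, 13), (33, 17), (40, 7), (11, 9), (23, 11), (47, 13), (17, 26),
          (24, 8)]
online = [(40, 16), (45, 16), (13, 23), (26, 19), (48, 3), (44, 14), (47, 36),
          (14, 43), (22, 33), (38, 5), (19, 24), (50, 14), (31, 29), (44, 27),
          (28, 7)]

def averages(rows):
    """Return (mean enrollment, mean attrition %) rounded to whole numbers."""
    n = len(rows)
    return (round(sum(e for e, _ in rows) / n),
            round(sum(a for _, a in rows) / n))

print(averages(campus))  # (25, 14)
print(averages(online))  # (34, 21)
```

This reproduces the reported overall enrollment averages of 25 students (campus) and 34 students (online) and the 21 percent overall online attrition figure; the campus attrition rates average about 14 percent.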
The results shown in
Table 1 indicate that some business disciplines are more conducive to
attracting and retaining students than others are. Discipline-specific
implications include the following:
Accounting
The basic
accounting course (Financial Accounting) and the advanced accounting course
(Accounting for Decision Making) both have higher online enrollment and
attrition rates. Of primary interest is the observation that attrition rates
in the two instruction modes are comparable, contradicting the notion that the
detail-specific nature of accounting makes courses unconvertible to the online
format.
Economics
The online
versions of the basic economic course (Contemporary Economic Theory) and the
advanced economic course (Advanced Macroeconomic Theory) both have higher
enrollment and attrition rates than their classroom counterparts. The two
field courses in economics (International Economics and Money and Capital
Markets) both have online enrollments over three times greater than the campus
equivalent, indicating an extreme interest in global economic courses
delivered via the Internet.
Finance
The corporate finance course in the study had a substantially higher online
enrollment and attrition rate than its classroom counterpart. The most glaring
observation is the lack of retention in the online format. The attrition rate
in the online finance course is an alarming 36 percent, indicating that one in
three students who start the course do not complete it.
Business
Statistics
Enrollment in the basic statistics course (Statistical Methods in Business) is
slightly higher in the online mode, but enrollment in the advanced course
(Quantitative Analysis in Business) is substantially higher in the campus
mode. Attrition rates for the online statistics course are extremely high. The
43 percent attrition rate of the basic online statistics course is higher than
that of any other course in the study and may have a lot to do with campus
enrollment in the advanced statistics course being higher than the online
counterpart.
Computer
Information Systems
Enrollment and attrition rates for the Computer Information Technology
business course are not significantly different across instruction modes. The
online attrition rate of five percent is well below the overall average of 21
percent.
Marketing
The basic marketing course (Managerial Marketing) and the advanced marketing
course (Seminar in Marketing) both have higher enrollment and attrition rates
online than in the classroom. The advanced marketing course was offered four
times during the study period and averaged 50 students per course, making it
the most popular online course.
Management
The three management courses have atypical results. The online course in
Organizational Behavior has a relatively high attrition rate with lower than
average enrollment. Much like the global economic courses, enrollment in the
field course in International Management is substantially higher in the online
format. Enrollment and attrition rates for the MBA capstone course in
Strategic Management are not significantly different across instruction modes.
Conclusions
If a university
offers courses over the Internet, will anyone enroll in them? If students
enroll in a Web-based course, will they complete it or be attrition
casualties? The results of this study imply that online courses enroll more
students, but suffer from higher attrition rates than traditional campus
courses. It appears that the enrollment-augmenting advantages of
Internet-based instruction, like making it easier to manage work and school
and allowing more time with family and friends, are attractive to a
significant number of graduate business students. The sustained higher
enrollment across several business courses is a positive sign for the future
of Internet-based instruction. On the other hand, attrition appears to be a
problem with some of the online courses. Courses in the disciplines of
accounting, economics, computer information systems, marketing, and management
appear to be very conducive to the Internet format, as attrition rates are
comparable to the campus equivalents. Courses in business statistics and
finance, with attrition rates in excess of 30 percent, do not appear to be
very well suited to the Internet instruction format. An obvious conclusion is
that courses requiring extensive mathematics are difficult to convert to an
Internet instruction format. It is important to note that results of this
study are preliminary and represent a first step in an attempt to assess the
effectiveness of Internet-based instruction. Much more research is needed
before any definitive conclusions can be reached.
A Worst-Case MOO
"Students’ Distress with a
Web-based Distance Education Course: An Ethnographic Study of Participants'
Experiences"
http://www.slis.indiana.edu/CSI/wp00-01.html
Noriko Hara SILS Manning Hall
University of North Carolina at Chapel Hill Chapel Hill, North Carolina 27599 haran@ils.unc.edu
Rob Kling The Center for Social
Informatics SLIS Indiana University Bloomington, IN 47405 kling@indiana.edu
http://www.slis.indiana.edu/kling
(812) 855-9763
Many advocates of
computer-mediated distance education emphasize its positive aspects and
understate the kinds of communicative and technical capabilities and work
required by students and faculty. There are few systematic analytical studies
of students who have experienced new technologies in higher education. This
article presents a qualitative case study of a web-based distance education
course at a major U.S. university. The case data reveal a topic that is
glossed over in much of the distance education literature written for
administrators, instructors and prospective students: students' periodic
distressing experiences (such as frustration, anxiety and confusion) in a
small graduate-level course due to communication breakdowns and technical
difficulties. Our intent is that this study will enhance understanding of the
instructional design issues, instructor and student preparation, and
communication practices that are needed to improve web-based distance
education courses.
Bob Jensen's Comments
The Hara and Kling study mentioned above focuses upon student messages, student
evaluations, and instructor evaluations of a single course. The
interactive communications took place using MOO software that is sometimes used
for virtual classroom settings, although the original intent of both MOO and MUD
software was to create a virtual space in text in which students or game users
create their own virtual worlds. You can read more about MUD and MOO
virtual environments at http://www.trinity.edu/~rjensen/245glosf.htm#M-Terms.
In some universities, MOO software has been used to create virtual
classrooms. In most instances, however, these have given way to multimedia
virtual classrooms rather than entirely text-based virtual
classrooms.
MOO classrooms have been used very
successfully. For example, at Texas Tech University, Robert Ricketts has
taught an advanced tax course in a MOO virtual classroom while
students were scattered across the U.S. in internship programs. His course
is not an internship course; it is a tax course that students take while
away from campus on internships. Professor Ricketts is a veteran tax
instructor and taught the MOO course under somewhat ideal conditions. The
students were all familiar with electronic messaging, and they all knew each
other well from onsite courses they had taken together on the
Texas Tech campus in previous semesters. They had also taken previous
courses from Professor Ricketts in traditional classroom settings.
In contrast to Professor
Ricketts' MOO virtual classroom, the Hara and Kling study reported above is
almost a worst-case scenario in a MOO virtual classroom. The instructor
was a doctoral student who had never taught the class before, nor had she ever
taught any class in a MOO virtual classroom. Half the class "had only
minimal experience with computers" and had never taken a previous distance
education course. The students had never taken a previous course of any
type from the instructor and did not know each other well. The course
materials were poorly designed and had never been field tested. Students
were hopelessly confused and did not deal well with text messaging (graphics,
audio, and video were apparently never used in the course). This seems
utterly strange in an age where text, graphics, audio, and even video files can
be attached to email messages. It also seems strange that the students
apparently did not pick up the telephone when they were so confused by the
networked text messaging.
One of the most important lessons of the Hara and Kling study is the
tendency of hopelessly confused students to give up rather than keep pestering
the instructor or each other until they see the light. Instructors cannot
assume that students
are willing to air their confusions. A major reason is a fear of airing
their ignorance. Another reason is impatience with the slowness of text
messaging where everything must be written/read instead of having conversations
with audio or full teleconferencing.
In summary, the Hara and Kling study is
not so much a criticism of distance education as it is a study of student
behavior in settings where the distance education is poorly designed and
delivered. A similar outcome is reported in "Student Performance In
The Virtual Versus Traditional Classroom," by Neil Terry, James Owens and
Anne Macy, Journal of the Academy of Business Education, Volume 2, Spring
2001 --- http://www.abe.villanova.edu/tocs01.html.
An earlier report on this topic appears in an article entitled "Student and Faculty
Assessment of the Virtual MBA: A Case Study," by Neil Terry,
James Owens, and Anne Macy, Journal of Business Education, Volume 1, Fall
2000, 33-38 --- http://www.abe.villanova.edu/tocf00.html.
The article points out how badly many
students want online MBA programs and how difficult it is to deliver an online program
where students perform as well as in a traditional classroom. In
particular, too many things get confounded to evaluate the potential of online
learning. For example, faculty are seldom veterans of online delivery at
this stage of development of online learning. The instructors are often not top
faculty, who tend to be so involved in research projects that they balk at
having to develop online learning materials. And the materials themselves are seldom
ideal for online learning in terms of streaming audio/video, online mentors who
are experts on the course topics, and daily interactive feedback regarding
learning progress.
The online degree program is from West Texas
A&M University (WT) in the Texas Panhandle. The above Terry, Owens, and Macy
(2000) article points out that student evaluations of the
program were quite low (1.92 on a five-point scale where 5.00 is the highest
possible rating) but the perceived need of the program is quite high (3.30 mean
outcome). Over 92% of the students urged continuation of the program in
spite of unhappiness over its quality to date. In another survey, eight
out of twelve faculty delivering the courses online "feel the quality of
his/her virtual course is inferior to the quality of the equivalent campus
course." However, ten of these faculty stress that they "will
significantly improve the quality of the virtual course the next time it is
taught via the Internet format." The above Terry, Owens, and Macy (2001)
study reports that online students had 14% lower test performance than the
traditional classroom control group. This is contrary to the University of
Illinois SCALE outcomes where online students tend to perform as well or
better. See http://faculty.trinity.edu/rjensen/255wp.htm#Illinois.
A major complaint of the faculty is
"the time required to organize, design, and implement a virtual
course."
This study is consistent with findings from many
other startup online education and training programs. The major problem is
that online teaching is more difficult and stressful than onsite teaching.
A great deal of money and time must be spent in developing learning materials
and course delivery has a steep learning curve for instructors as well as
students.
A portion of the conclusion of the
study is quoted below:
The results of
this MBA case study present conflicted views about online instruction. Both
the critics who worry about quality and the advocates who contend students
want online courses appear to be correct based upon this case study.
While a majority of students acknowledge the benefits of Internet instruction,
they believe that the online instruction is inferior to the traditional
classroom. A significant number of students are not satisfied with the
Internet program and none of the students want an entirely virtual program.
However, most students want online instruction to continue and plan on
enrolling in one or more future courses. Faculty
members recognize the flexibility advantage of Internet-based instruction but
express concerns over the time-intensive nature of the instruction mode and
the impact of student course evaluations on promotion and tenure.
The conclusions of this article are in
line with my Advice to New Faculty at http://faculty.trinity.edu/rjensen/000aaa/newfaculty.htm
You can read more about assessment of
virtual courses in the "assessment" category at http://faculty.trinity.edu/rjensen/bookbob2.htm
Reply from Patricia Doherty [pdoherty@BU.EDU]
The New York Times
had an article (I believe it was in the Sunday, November 19, edition) that
addressed the perception among recruiters of online MBA programs. The gist of
it was that there are many mediocre programs, but a few very good ones. The
students are enthusiastic about the benefits they provide, but the business
community (i.e., the ones who the students hope will hire them) is still
skeptical.
pat
Reply from Eckman, Mark S, CFCTR [meckman@att.com]
Reading the comments
on motivation reminded me of a quote from Bernard Baruch that tells me a lot
about motivation.
"During my
eighty-seven years I have witnessed a whole succession of technological
revolutions. But none of them has done away with the need for character in
the individual or the ability to think."
While character
development and critical thinking may not be the most important items
considered in development of curriculum or materials for the classroom, they
can be brought into many accounting discussions in terms of ethical
questions, creativity in application or simple 'what if' scenarios. People
have many motivations. Sometimes you can motivate people, sometimes you
can't. Sometimes motivations rise by themselves.
Thinking back to
undergraduate times, I still remember the extreme grading scale for
Accounting 101 from 1974. It started with 97-100 as an A and allowed 89 as
the lowest passing grade. The explanation was that this was the standard the
profession expected in practice. I also remember 60% of the class leaving
when that scale was placed on the board! They had a different set of
motivations.
Bob Jensen's reply to a message from
Craig Shoemaker
Hi Craig,
You have a lot in common with John
Parnell. John Parnell (Head of the Department of Marketing & Management at
Texas A&M) opened my eyes to the significant thrust his institution is
making in distance education in Mexico as well as parts of Texas. After two
semesters, this program looks like a rising star.
Dr. Parnell was my "Wow Professor
of the Week" on September 26, 2000 at http://faculty.trinity.edu/rjensen/book00q3.htm#092600
You can read more about his program at the above website.
Congratulations on making this thing
work.
Bob (Robert E.) Jensen, Jesse H. Jones Distinguished Professor of Business Administration, Trinity University
Email: rjensen@trinity.edu http://www.trinity.edu/rjensen
-----Original Message-----
From: docshoe1 [mailto:docshoe1@home.com]
Sent: Sunday, November 26, 2000 11:25 AM
To: rjensen@trinity.edu Subject: Education -- Online
HI Bob,
I read with interest
your note regarding online education. I just concluded teaching my first one.
It was an MBA capstone course -- Business Planning Seminar. I had 16 students
spread throughout the USA and Mexico. The course requirement was to write and
present, online, a business plan consisting of an extensive marketing plan,
operations plan and financial plan. Without knowing each other, the students
formed teams of 4. The student commitment required 15-20 hours per week.
I held weekly
conference calls with each team, extensively used chat rooms for online
discussion and e-mailed some teams nearly every day. The demand on my time
was at least twice what it would have been with one 3 1/2-hour class each week.
The written plans and
the online presentations were quite thorough and excellent. The outcome was,
in many ways, better due to the extensive and varied communications media
used. My student evaluations were as high as when I have done the course
"live" in class. The "upfront" work to prepare the course
was extensive.
Craig
Craig Shoemaker,
Ph.D.
Associate Professor
St. Ambrose University
Davenport, Iowa
Some Technology Resources Available to Educators
"Accountability: Meeting The Challenge With Technology," Technology
& Learning, January 2002, Page 32 --- http://www.techlearning.com/db_area/archives/TL/2002/01/accountb.html
"Teaching College Courses Online vs. Face-to-Face," by Glenn Gordon
Smith, David Ferguson, Mieke Caris. T.H.E. Journal, April 2001, pp.
18-26. http://www.thejournal.com/magazine/vault/A3407.cfm
We interviewed 21 instructors who had taught both in
the distance and the face-to-face format. The instructors ranged from assistant
professors to adjunct professors. Fifteen of the 21 instructors taught in the
context of the SUNY Learning Network, a non-profit, grant-funded organization
that provides the State Universities of New York (SUNY) with an infrastructure,
software, Web space and templates for instructors to create their online course.
The Learning Network also provides workshops on developing and teaching online
courses, a help desk and other technical support for Web-based distance
education. The remaining six informants taught Web-based distance education
courses in similarly supported situations at state universities in California
and Indiana.
. . .
Once the course begins, the long hours continue.
Online instructors must log on to the course Web site at least three or four
times a week for a number of hours each session. They respond to threaded
discussion questions, evaluate assignments, and above all answer questions
clearing up ambiguities, often spending an inordinate amount of time
communicating by e-mail. The many instructor hours spent online create an
"online presence," a psychological perception for students that the
instructor is out there and is responding to them. Without this, students
quickly become insecure and tend to drop the class.
This great amount of work sounds intimidating;
however, most online instructors looked forward to their time spent online as
time away from their hectic face-to-face jobs. One respondent commented:
"This is why I like the online environment. It's kind of a purified
atmosphere. I only know the students to the extent of their work. Obviously
their work is revealing about them."
The Web environment presents a number of educational
opportunities and advantages over traditional classes, such as many
informational resources that can be seamlessly integrated into the class.
Instructors can assign Web pages as required reading, or have students do
research projects using online databases. However, it is important that the
instructor encourage the students to learn the skills to differentiate valid
and useful information from the dregs, as the Internet is largely unregulated.
Some instructors also had online guests in their
classes (authors, experts in their field, etc.) residing at a distance, yet
participating in online threaded discussions with the students in the class.
All these things could theoretically be accomplished in a traditional class by
adding an online component; however, because online classes are already on the
Web, these opportunities are integrated far more naturally.
Other advantages of online classes result from
psychological aspects of the medium itself. The emphasis on the written word
encourages a deeper level of thinking in online classes. A common feature in
online classes is the threaded discussion. The fact that students must write
their thoughts down, and the realization that those thoughts will be exposed
semi-permanently to others in the class seem to result in a deeper level of
discourse. Another respondent stated:
"The learning appears more profound as the
discussions seemed both broader and deeper. The students are more willing to
engage both their peers and the professor more actively. Each student is more
completely exposed and can not simply sit quietly throughout the semester.
Just as the participating students are noticeable by their presence, the
non-participating students are noticeable by their absence. The quality of
students' contributions can be more refined as they have time to mull concepts
over as they write, prior to posting."
The asynchronous nature of the environment means that
the student (or professor) can read a posting and consider their response for
a day before posting it. Every student can and, for the most part, does
participate in the threaded discussions. In online classes, the instructor
usually makes class participation a higher percentage of the class grade,
since instructor access to the permanent archive of threaded discussions
allows more objective grading (by both quantity and quality). This differs
from face-to-face classes where, because of time constraints, a relatively
small percentage of the students can participate in the discussions during one
class session. Because of the lack of physical presence and absence of many of
the usual in-person cues to personality, there is an initial feeling of
anonymity, which allows students who are usually shy in the face-to-face
classroom to participate in the online classroom. Therefore it is possible and
quite typical for all the students to participate in the threaded discussions
common to Web-based classes.
This same feeling of anonymity creates some political
differences, such as more equality between the students and professor in an
online class. The lack of a face-to-face persona seems to divest the professor
of some authority. Students feel free to debate intellectual ideas and even
challenge the instructor. One respondent stated that "In a face-to-face
class the instructor initiates the action; meeting the class, handing out the
syllabus, etc. In online instruction the student initiates the action by going
to the Web site, posting a message, or doing something. Also, I think that
students and instructors communicate on a more equal footing where all of the
power dynamics of the traditional face-to-face classroom are absent."
Students are sometimes aggressive and questioning of
authority in ways not seen face-to-face. With the apparent anonymity of the
Internet, students feel much freer to talk. "Students tended to get
strident with me online when they felt frustrated, something that never
happened in face-to-face classes because I could work with them, empathize and
problem solve before they reached that level of frustration," noted one
respondent.
In the opening weeks of distance courses, there is an
anonymity and lack of identity which comes with the loss of various channels
of communication. Ironically, as the class progresses, a different type of
identity emerges. Consistencies in written communication, ideas and attitudes
create a personality that the instructor feels he or she knows.
"Recently I had printed out a number of student
papers to grade on a plane. Most had forgotten to type their names into their
electronically submitted papers. I went ahead and graded and then guessed who
wrote each one. When I was later able to match the papers with the names, I
was right each time. Why? Because I knew their writing styles and interests.
When all of your communication is written, you figure out these things
quickly."
This emergence of online identity may make the whole
worry of online cheating a moot point. Often stronger one-to-one relationships
(instructor-student and student-student) are formed in online courses than in
face-to-face classes.
Conclusions
Contrary to intuition, current Web-based online
college courses are not an alienating, mass-produced product. They are a
labor-intensive, highly text-based, intellectually challenging forum which
elicits deeper thinking on the part of the students and which presents, for
better or worse, more equality between instructor and student. Initial
feelings of anonymity notwithstanding, over the course of the semester,
one-to-one relationships may be emphasized more in online classes than in more
traditional face-to-face settings.
With the proliferation of online college classes, it
is important for the professor to understand the flavor of online education
and to be reassured as to the intellectual and academic integrity of this
teaching environment.
Bob Jensen's Recap:
- Teaching online is harder work.
- Some instructors also had online guests in their classes (authors, experts
in their field, etc.) residing at a distance, yet participating in online
threaded discussions with the students in the class. (Sharon
Lightner also uses this approach to bringing accounting standard setters and
practitioner experts into her online international accounting course.
See http://faculty.trinity.edu/rjensen/255light.htm
)
- Other advantages of online classes result from psychological aspects of
the medium itself. The emphasis on the written word encourages a deeper
level of thinking in online classes. A common feature in online classes is
the threaded discussion. The fact that students must write their thoughts
down, and the realization that those thoughts will be exposed
semi-permanently to others in the class seem to result in a deeper level of
discourse.
- The asynchronous nature of the environment means that the student (or
professor) can read a posting and consider their response for a day before
posting it.
- The lack of a face-to-face persona seems to divest the professor of some
authority. Students feel free to debate intellectual ideas and even
challenge the instructor.
- This emergence of online identity may make the whole worry of online
cheating a moot point. Often stronger one-to-one relationships
(instructor-student and student-student) are formed in online courses than
in face-to-face classes.
- Contrary to intuition, current Web-based online college courses are not an
alienating, mass-produced product. They are a labor-intensive, highly
text-based, intellectually challenging forum which elicits deeper thinking
on the part of the students and which presents, for better or worse, more
equality between instructor and student. Initial feelings of anonymity
notwithstanding, over the course of the semester, one-to-one relationships
may be emphasized more in online classes than in more traditional
face-to-face settings.
"Distance Learning in Accounting: A Comparison Between a Distance
and a Traditional Graduate Accounting Class," by Margaret Gagne and Morgan
Shepherd, T.H.E. Journal, April 2001, pp. 58-65 --- http://www.thejournal.com/magazine/vault/A3433.cfm
This study analyzed the performance of two class
sections in an introductory graduate level accounting course in the fall
semester of 1999. One section was a traditional, campus-based class taught in
the conventional face-to-face lecture mode. The other section was taught in a
distance education format. In the distance class, the students had no
face-to-face contact with each other or the instructor. The distance students
could communicate via telephone, e-mail, threaded bulletin board discussions
and synchronous chat technologies. Except for the textbook, the distance class
received all material for the course over the Internet. The distance section
received supplemental administrative and course information, e.g., solutions
to assigned problems, via the Web. These materials were distributed to the
campus-based students during class.
To enhance comparability, the same text, syllabus,
assignments and examinations were used in both classes. The professor (who has
over 12 years of experience teaching accounting) taught both sections.
The traditional section met once a week over a
17-week semester. Each class lasted two and a half hours. During class,
approximately half of the time was spent presenting and explaining material
from the text; the remaining class time was used to go over the assigned
homework problems.
The distance section never formally met during the
same 17-week period. In an effort to provide more of a "class"
feeling, the students and instructor placed profiles on the class Web site.
These profiles were intended to give a personal and professional perspective
of the individuals. They included information such as work history, family
history, favorite hobbies, geographic location, and other miscellaneous
information that may help give a sense of who the student is. Many
participants uploaded a picture to give others more of an idea of who they
are.
. . .
Summary
The findings of this paper supported prior research:
the performance of students in a distance course was similar to the
performance of students in the on-campus course for an introductory accounting
graduate class. Furthermore, the students' evaluations of the course were
similar, although students in the online course indicated that they were less
satisfied with instructor availability than the in-class students. In terms of
student performance, there did not seem to be a difference between the
multiple choice exam format and the complex problem solving exam format.
Future research in this area should center on the
issue of improving student perception of instructor availability. Is a richer
medium required (i.e. video), or can certain procedures be incorporated to
help students feel as if the instructor is more available? This theme can be
carried out across different subjects to see if some subjects are more prone
to the student perception problem than others. At least in this graduate level
introductory accounting course, it appears as if distance education delivery
is as effective as the traditional campus methodology in terms of student
learning outcomes.
From Infobits on September 28, 2001
ONLINE LEARNING VERSUS CLASSROOM LEARNING
Much research into the efficacy of online learning over classroom learning
has been anecdotal and of questionable quality, leading to inconclusive
results and the need for further study. Two recent articles in the JOURNAL OF
INTERACTIVE INSTRUCTION DEVELOPMENT address this question of efficacy.
Terrence R. Redding and Jack Rotzein ("Comparative Analysis of Online
Learning Versus Classroom Learning," Journal of Interactive Instruction
Development, vol. 13, no. 4, Spring 2001, pp. 3-12) compare the learning
outcomes associated with three classroom groups and an online community
college group in pre-licensing insurance training. They conclude that
"online instruction could be highly effective" and that a
"higher level of cognitive learning was associated with the online
group." They also note that higher achievements of the online group can
be attributed to the self-selected nature of the students, the instructional
design of the online course, and the motivation associated with adult
learners. Redding and Rotzein recommend that further studies be conducted in
other fields of study to see if their results can be replicated in other
professions or disciplines.
In the same issue Kimberly S. Dozier (Assistant Professor of English,
Dakota State University) urges restraint in rushing to replace traditional
classroom courses with online classes ("Affecting Education in the
On-Line 'Classroom': The Good, the Bad, and the Ugly," Journal of
Interactive Instruction Development, vol. 13, no. 4, Spring 2001, pp. 17-20).
She cautions educators "not to forget what makes us teachers and what
makes us learners. We must not forget the limitations of technology and we
must not assume that an on-line course duplicates a traditional course."
One of the aspects of learning that she fears may be missing in some online
learning experiences is self-reflection as students are "simply
responding to a specified task and moving on to the next one."
Note: neither article is available on the Web. Check with your college or
university library to obtain copies.
Journal of Interactive Instruction Development [ISSN 1040-0370] is
published quarterly by the Learning Technology Institute, 50 Culpeper Street,
Warrenton, VA 20186 USA; tel: 540-347-0055; fax: 540-439-3169; email: info@lti.org;
Web: http://www.lti.org/
From Syllabus News on October
18, 2002
Online Nurse Ed
Service Accredited in 50 States
eMedicine Inc., an
online service for health care professionals, said it received approval to
offer accredited nursing continuing education in California, and can now offer
accredited nursing continuing education courses in all 50 states. The service
offers over 40,000 hours of continuing education for nurses, physicians,
pharmacists and optometrists, of which 10,000 hours are available for nurses.
Accreditation for eMedicine nursing CE is provided through the University of
Nebraska Medical Center's College of Nursing Continuing Nursing Education
program. Catherine Bevil, director of continuing nursing education in UNMC’s
College of Nursing, said the service’s “large audience and commitment to
creating current clinical information … provides an effective outlet for
delivering UNMC's College of Nursing continuing nursing education courses.”
Success
Stories in Education Technology
LearningSoft Awarded Patent for Adaptive Assessment System
From T.H.E. Journal Newsletter on March 30, 2006
The U.S. Patent and Trademark Office has granted
LearningSoft LLC (
http://www.learningsoft.net ) a patent titled
"Adaptive Content Delivery System and Method," which covers the company's
proprietary Learningtrac adaptive assessment system. Learningtrac uses
artificial intelligence to optimize assessment and test preparation for
individual students' strengths and weaknesses. The system uses a student's
own knowledge base, learning patterns, and measures of attention to the
material to continually adapt curriculum content to the student's needs and
spur skill development. Educators are then able to monitor individual
student assessments as well as track classroom progress. Later this year,
Learningtrac will be integrated into LearningSoft's Indigo Learning System,
which is debuting at the 2006 Florida Educational Technology Conference.
Integrate Technology into Lesson Plans
"Better teaching with technology: Program aims to help integrate
technology into lesson plans, by Micholyn Fajen, The Des Moines Register,
February 8, 2005 --- http://desmoinesregister.com/apps/pbcs.dll/article?AID=/20050208/NEWS02/502080339/1004
Educators from two Waukee elementary schools will
learn new ways to implement technology into classroom curriculum by
participating in free training sessions through Heartland Area Education
Agency.
Technology teams from both Brookview and Eason
elementary schools will attend technology integration mentoring, part of a
three-year-old Heartland program offered to area schools. This is Waukee's
first time attending at the elementary level.
"The intent of the program is to provide
participants with skills and strategies that prepare them to mentor other
educators in the technology integration process," said Cindi McDonald,
principal of Brookview.
Building principals and district administrators began
instruction Wednesday. Meetings will continue into June.
The training will help educators learn to get the
most out of the technology they use.
"I've worked in six different districts and can
say Waukee is very blessed to have a lot of hardware functioning here,"
McDonald said. "We have a commitment to have the computers, teachers and
staff positioned in a way that we can make a difference."
Brian Pierce, technology teacher at Brookview
Elementary in West Des Moines, hopes to gain more tactics that help him
approach classroom teachers and show them how to integrate the skills into
everyday learning.
"We have a mobile computer lab with 15 laptops
teachers can pull into the classroom," Pierce said. "We want to
optimize this lab with kindergarten through fifth-grade teachers and find
curriculums and technological links to make that happen."
Some Brookview classrooms already integrate
technology into their homework. Fourth-graders recently assigned a report of a
famous person are researching information over the Internet and creating
presentations on the computer.
Pierce is teaching the students how to drop their
presentations into PowerPoint and will help them burn a CD so they can take
the work home to show parents.
"We have some good and effective uses of
technology here," McDonald said. "There are pockets of greatness,
but we still need to build a common vocabulary among teachers. Our
kindergarten through second grade still struggle in that area."
Three out-of-state technology consultants were
brought in to teach the program. Of the 55 school districts in Heartland's
region, 15 have teams that will attend, coming from as far away as Carroll.
"We've had good responses from past
participants," said Tim Graham, director of Heartland technology
services. "This year we've modified the program to include administrators
because teachers found they needed upper-level support of the programs.
Administrators needed a better understanding of how important technology is in
the classroom."
Continued in the article
Teens praise online
algebra lessons, March 30, 2004 --- http://the.honoluluadvertiser.com/article/2004/Mar/30/ln/ln17a.html
The school has enough textbooks, but the students
don't need them in Yvette McDonald's algebra class at Kahuku High and
Intermediate School.
Photo caption: Kahuku students Brendan Melemai, Daesha Johnson, and James
Bautista use computers instead of books in algebra class, working with an
interactive program developed last year by Honolulu Community College.
(Jeff Widener, The Honolulu Advertiser)
And that's a good thing.
It's because her students in grades 9 through 12
learn math not with books but through an interactive computer program
developed last year by Honolulu Community College and being piloted in four
Hawai'i high schools, a middle school and a community college.
With this new approach, the hope is to boost high
school math scores and cut down on expensive and time-consuming remedial math
in college.
"It's pretty good," said 17-year-old James
Bautista Jr., peering intently at the algebra equation on the screen before
choosing the correct answer from several suggestions.
"Sometimes teachers make it harder than it
really is. If I see it first and try to understand it myself without the
teacher dictating, it's kind of better. When I'm pressured into it, I'm not
good. I'm better at this where I can take my time."
While it's too soon to know if this online algebra
class will improve high school math scores, end-of-semester assessment testing
at HCC in mid-May will show how it's working among college students.
Assessment testing will be done in high schools next year.
"They should have this at Waialua,"
Bautista said. "I failed math at Waialua twice — algebra and geometry.
The teacher's a cool guy, but he's so quick I had a hard time keeping up with
him."
"It's so much easier," agrees 17-year-old
Francisco "Pancho" Peterson. "If you click on the magnifying
glass, it shows you the procedure of what you should know, and that helps a
lot. It shows you what to do. In a way, it's like a big cheat sheet to figure
out what you did."
Continued in the
article
From the June 25 edition of Syllabus
News
Wharton webCafe
Earns High Satisfaction Ratings
A survey of students
of the Wharton School at the University of Pennsylvania found that 97 percent
rated the school's web-based virtual meeting application -- dubbed webCafe --
as valuable to their education experience. Since Wharton began using webCafe
in 1998 as part of the school's student intranet, use of webCafe has expanded
to 5,200 users, 99 percent of full-time MBA candidates, all executive MBA
students, and almost all Wharton undergraduates. webCafe is one component of
Wharton's plan to reshape its business education. The school's Alfred West Jr.
Learning Lab is exploring methods of learning and instruction using
interactive multimedia and real-time simulations. This August, it is opening
Jon M. Huntsman Hall, which Wharton claims will be the largest and most
sophisticated instructional technology center at any business school.
For more information,
visit: http://www.wharton.upenn.edu/learning
Top K12's 100 Wired Schools --- http://FamilyPC.com/smarter.asp
The winners are listed at http://familypc.com/smarter_2001_top.asp
Why (Some) Kids Love School --- http://familypc.com/smarter_why_kids.asp
Dropout rates are down and test scores are up.
Students are engaged in learning and their self-esteem is soaring. So what's
really going on within the classroom walls of the country's top wired schools?
By Leslie Bennetts
Once upon a time, back in the olden days, kids used
to exult about getting out of school, celebrating their release from drudgery
by singing "No more pencils, no more books!" or so the schoolyard
ditty would have it. These days, with the explosion of technology that's
revolutionizing education around the country, many students are now eager to
stay after school, competing for access to all the high-tech equipment that's
opening up so many new opportunities to them.
For younger kids, technology is transforming the
schoolwork their older siblings sometimes regarded as tedious into challenging
games and activities. For high-school students, technology may banish once and
for all the tired questions about relevance. Even the most rebellious
adolescents are aware of the real-world value of the skills and experience
they're getting in wired schools.
Teachers who have mastered the art of integrating
technology into the curriculum also deserve credit. For a closer look at some
of the ways educators are transforming American schools, here are six
outstanding examples from this year's Top 100 Wired Schools—two elementary,
two middle, and two high schools that have applied creativity as well as
resources to the educational challenges of the 21st century.
"Using Hypertext in Instructional
Material: Helping Students Link Accounting Concept Knowledge to Case
Applications," by Dickie Crandall and Fred Phillips, Issues in
Accounting Education, May 2002, pp. 163-184 --- http://accounting.rutgers.edu/raw/aaa/pubs.htm
We
studied whether instructional material that connects accounting concept
discussions with sample case applications through hypertext links would enable
students to better understand how concepts are to be applied to practical case
situations. Results from a laboratory experiment indicated that students who
learned from such hypertext-enriched instructional material were better able
to apply concepts to new accounting cases than those who learned from
instructional material that contained identical content but lacked the
concept-case application hyperlinks. Results also indicated that the
learning benefits of concept-case application hyperlinks in instructional
material were greater when the hyperlinks were self-generated by the students
rather than inherited from instructors, but only when students had generated
appropriate links. When students generated inappropriate concept-case
application hyperlinks in the instructional material, the application of
concepts to new cases was similar to that of other students who learned from
the instructional material that lacked hyperlinks.
The 2002 CWRL Colloquium (Computers,
Writing, Research, and Learning) --- http://www.cwrl.utexas.edu/currents/
Teaching Versus Research
"Favorite Teacher" Versus "Learned the Most"
It may take years for a graduate to change an evaluation of an instructor
One of Sanford's key points is that it may take years
for a student to fully appreciate the quality of his or her education. What
might have seemed tedious, dull, or unimportant at the time may, in the long
run, turn out to be more valuable to a person's life than that which seemed
immediate and exciting in the classroom. Unfortunately, as Sanford notes, that
long-term value often is not captured in the immediacy of student evaluations of
instruction. Wise department chairs and deans take that into account when
reviewing those evaluations. But, here at Krispy Kreme U. not all department
chairs and deans are wise.
Mark Shapiro commenting on a piece by Sanford Pinsker, "You Probably Don't
Remember Me, But....," The Irascible Professor, July 12, 2006 ---
http://irascibleprofessor.com/comments-07-12-06.htm
Bob Jensen's threads on higher education controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
In
the movie
“Ghostbusters,” Dan Aykroyd commiserates with Bill
Murray after the two lose their jobs as university researchers. “Personally, I
like the university. They gave us money and facilities, and we didn’t have to
produce anything. You’ve never been out of college. You don’t know what it’s
like out there. I’ve worked in the private sector. They expect results.” I can
find some amusement in this observation, in a self-deprecating sort of way,
recognizing that this perception of higher education is shared by many beyond
the characters in this 1980s movie.
Jeremy Penn, "Assessment for ‘Us’
and Assessment for ‘Them’," Inside Higher Ed, June 26, 2007 ---
http://www.insidehighered.com/views/2007/06/26/penn
Why then do the studies show that a faculty member's
research activity and his or her teaching performance basically are uncorrelated
(neither positively correlated nor negatively correlated)? My best guess is that
these studies have fundamental flaws. After reading some of Nils' references as
well as more recent work on the subject, I believe that most of these studies
measure both teaching effectiveness and research activity incorrectly. On the
teaching effectiveness side, student evaluations of teaching often are the only
measure used in those studies; and, on the research productivity side generally
only numbers of publications are counted. Neither of these data points really
measures quality. The student evaluations often are highly correlated with the
grade that a student expects to receive rather than how much the student has
learned. Faculty members who are engaged in research often are demanding of
themselves as well as their students, so that may skew their student
evaluations. Measuring research activity by the number of papers published tends
to skew the results towards those faculty members who would view themselves
primarily as researchers and teachers of graduate students rather than as
teacher scholars who devote as much effort to their teaching as to their
research. In fact one of the correlations observed in the research is that those
faculty members who publish the most often have less time available to devote to
their teaching.
Nils Clausson, "Is There a Link
Between Teaching and Research?" The Irascible Professor, December
30, 2004 --- http://irascibleprofessor.com/comments-12-30-04.htm
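The measurement problem described above can be illustrated with a small
simulation (my own sketch, not part of the cited studies): under classical
attenuation, noisy proxies such as student evaluations and raw publication
counts shrink an observed correlation toward zero even when the underlying
link between teaching and research quality is real. All numbers here are
hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Latent teaching and research quality with a genuine correlation of 0.5.
z = rng.normal(size=n)
teach = z
research = 0.5 * z + np.sqrt(1 - 0.5 ** 2) * rng.normal(size=n)

# Noisy proxies: each is half signal, half measurement noise (reliability 0.5),
# standing in for student evaluations and publication counts.
evals = teach + rng.normal(size=n)
pubs = research + rng.normal(size=n)

r_true = np.corrcoef(teach, research)[0, 1]
r_obs = np.corrcoef(evals, pubs)[0, 1]

# Classical attenuation predicts r_obs ~= r_true * sqrt(0.5 * 0.5) = r_true / 2.
print(f"true r = {r_true:.2f}, observed r = {r_obs:.2f}")
```

With both proxies at reliability 0.5, a true correlation of about 0.5 shows
up as an observed correlation of only about 0.25, so a study using such
measures could easily report "basically uncorrelated."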
Jensen Comment: By definition successful research is a contribution to new
knowledge. It cannot be conducted without scholarly mastery of existing
knowledge on the topic at hand. What Clausson seems to imply is that a
great teacher can have terrific and ongoing scholarship without adding to the
pile of existing knowledge. There also is the question of great
facilitators of research who do not publish. These professors are
sometimes great motivators and advisors to doctoral students. Examples
from accounting education include the now deceased Carl Nelson at the University
of Minnesota and Tom Burns from The Ohio State University. My point is
that great teachers come in all varieties. No one mold should ever be
prescribed, as is often done by today's promotion and tenure committees that
sometimes discourage fantastic teaching in favor of uninteresting publication.
December 31, 2004 reply from Amy Dunbar [Amy.Dunbar@BUSINESS.UCONN.EDU]
Shapiro stated, “No one became an astronomer, or an
economist, or an English professor in order to teach students astronomy,
economics, or English literature. I certainly didn't.”
Au contraire. I think a lot of us entered PhD
programs because we wanted to teach. I think that teaching and research are
positively correlated because scholarship is infused with curiosity and care.
Amy Dunbar
UConn
December 31, 2004 reply from David Fordham, James Madison University
[fordhadr@JMU.EDU]
Hear, hear! Amy, I agree. Shapiro's assertion that
"no one" gets a Ph.D. to teach is patently false, as evidenced by
the overwhelming majority of my colleagues who obtained a Ph.D. degree SOLELY
to obtain a teaching position (and many of whom eschew the superficiality of
much of today's published accounting "research").
Thus, if this statement of Shapiro's is false, why
would I believe his other statements that "not one shred of empirical
evidence" exists to relate good teaching to good research? That statement
is likely patently false too, mainly because, in my view, of construct
validity issues in the studies. Look carefully at the precise
wording of my following postulates:
1. Good research does not necessarily guarantee good
teaching. (I believe anyone who has any experience in academe would have to
accept this as well-established and supported empirically.)
2. Good teaching does not necessarily require
research ... DEPENDING (a major qualifier!) ON WHAT you are trying to teach.
THIS second postulate (more specifically, the qualifying predicate!) is the
one that most of Shapiro's citations (I assume, since I must admit I haven't
read them!) likely overlooked in their studies.
There are many subjects, including MANY undergraduate
course topics, which do not require constant updating and up-to-the-minute
currency, and thus a teacher may not benefit as greatly from being active in
research in that area. For these, research does not have to be correlated to
good teaching. So it probably isn't.
But there are many other areas which probably can NOT
be taught properly by anyone who is NOT staying current with the field by
being actively immersed in the present state-of-the-art. Medicine,
Pharmacology, Genetics, Materials Science, shoot, any one of us could name
dozens. And these fields do not need empirical evidence; it is deducible by
pure logic from the objectives of the teaching activity.
And even in these fields, doing good research does
not necessarily mean that you are a good teacher, but being a good teacher in
the field does require research.
By overlooking the characteristics of the field, the
characteristics of the course content, characteristics of the NEED of students
in the course, and similar oversights, Shapiro's researchers have confounded
their data so much that their conclusion (the lack of correlation between
research and teaching) lacks validity, even ignoring the obvious problems with
measurements that Bob pointed out.
Of course, Shapiro is a primary example of a
phenomenon I plan to make one of my best assertions: the complete replacement of
"factual reporting" with "sensationalism" in today's
communication realm.
I mean, honestly, why should accountants be different
from the rest of the world when it comes to abdicating the obligation to
report fairly, justly, objectively, and factually? The news media sure does
not report objectively (the New York Times and its affiliate the Herald
Tribune are absolute jokes when it comes to embellishment, sensationalizing,
biasing, coloring, and other departures from "reporting news", and
they are representative of their industry). Neither do other forms of
so-called "news" media, nor do practitioners of law (look at the
claims of civil rights attorneys!), politicians (nothing more need be said
here), so-called "reality TV", or any of the other professions which
the public (erroneously) is expected to perceive as communicating reality. So
why should accountants be held to a different standard than the rest of
society?
Rhetorical question, of course...
Happy New Year to anyone who reads this far on my
lengthy treatises. And Happy New Year to the others on this list, too!
David Fordham
James Madison University
January 1, 2005 reply from Bob Jensen
Hi David,
I tend to agree with everything you said below except the key phrase
"being a good teacher in the field does require research."
It would be more acceptable to me if you fine-tuned the phrase to read
"being a good teacher in the field does require keeping up with
research." Of course this leads head-on into the brick wall of
performance reward systems that find it easier to count publications than
subjectively evaluate scholarship.
A terrific surgeon or teacher of surgery is not required to contribute to
new and unknown surgical knowledge and/or technique. A surgical researcher may
spend a lifetime endeavoring to create a new surgical technique but that
endeavoring is not a requisite for greatness as a teacher of existing best
practices. In teaching of surgery, experience is the requisite for
greatness as a teacher of existing best practices.
Nor does a great historian or history teacher have to contribute to new
knowledge of the past in order to have an outstanding preparation to teach
what is already known about the past. Although researchers are almost
paranoid to admit it, it is possible to become the world's best scholar on a
topic without extending the knowledge base of the topic.
The problem with great research discovery is that endeavoring to discover
often drains a lifetime of energy at the edge of the head of a pin, energy
that has a high probability of draining efforts to prepare to teach about the
whole pin or the pin cushion as a whole.
The key problem is having the time or energy for preparation to teach. Research
in the narrow sometimes drains from the act of preparing to teach in breadth
and length. Also knowing the
history of the narrows does not necessarily mean that the researcher
understands the history of the entire river (which is my feeling about some of
our top empirical researchers in accounting who have very little knowledge of
the history of accounting as a whole).
Rivers versus pin cushions! Am
I mixing my metaphors again?
I agree that Shapiro made a dumb comment about why we got our doctorates
and became educators. I tend to agree, however, with Nils Clausson's
conclusion that seems to be lost behind Shapiro's dumb remark.
Bob Jensen
January 1, 2005 reply from Alexander Robin A [alexande.robi@UWLAX.EDU]
Wonderful! It is so nice to see these very reasonable
ideas articulated. The idea that keeping up in a field in order to teach it
requires active research (actually, publication numbers) rather than active
reading and study is one of those unquestioned mantras that comprise
educational mythology at most universities. I suspect the true reason for that
belief is that it is convenient - bureaucracies like easy measurements that
don't require much discernment. Counting publications is a very easy (if
erroneous) way to measure faculty performance.
Robin Alexander
January 1, 2005 reply from Dennis Beresford [DBeresfo@TERRY.UGA.EDU]
Bob
I wonder whether you could further fine tune your
comment to say, "being a good teacher does require keeping up with
developments in the field." While keeping up with research is certainly
helpful, the vast majority of accounting majors are undergrads and MAcc's who
will go into (mainly) public accounting and the corporate world. And, of
course, many of our accounting students are taking the class only as a
requirement of a different business major. I respectfully submit that knowing
what is happening in the accounting profession and broader business community
is quite important to effective teaching of those students.
Some accounting research may also be relevant,
particularly for teaching PhD students but that's a pretty tiny number.
Go Bulldogs and Trojans!
Denny Beresford
Cold and distant teaching vs. warm and close
Many instructors struggle with the role of rapport in
teaching. For some, the response is a cool and distant teaching style. This
essay argues that a style of appropriate warmth can promote student learning. It
offers definitions, examples, and implications for the instructor.
Robert F. Bruner, "'Do you Expect Me to Pander to the Students?' The Cold
Reality of Warmth in Teaching," SSRN Working Paper, June 2005 ---
http://ssrn.com/abstract=754504
As I said previously, great teachers come in about as many varieties as
flowers. Click on the link below to read about some of the varieties
recalled by students from their high school days. It should be noted that
"favorite teacher" is not synonymous with "learned the
most." Favorite teachers are often great at entertaining and/or
motivating. Favorite teachers often make learning fun in a variety of
ways.
The recollections below tend to lean toward entertainment and "fun"
teachers, but you must keep in mind that these were written after the fact by
former high school students. In high school, dull teachers tend not to be
popular before or after the fact. This is not
always the case when former students recall their college professors.
"'A dozen roses to my favorite teacher," The Philadelphia
Inquirer, November 30, 2004 --- http://www.philly.com/mld/Inquirer/news/special_packages/phillycom_teases/10304831.htm?1c
Students may actually learn the most from pretty dull teachers with high
standards and demanding assignments and exams. Dull teachers may also
be the dedicated souls who are willing to spend extra time in one-on-one
sessions or extra-hour tutorials that ultimately have an enormous impact on
mastery of the course. And then there are teachers who are not so
entertaining and do not spend much time face-to-face who are winners because
they have developed learning materials that far exceed those of other teachers
in terms of student learning.
In some cases, the “best learning” takes place in courses where students
hate the teacher who, in their viewpoint, does not teach. It
has a lot to do with metacognition in learning. See
http://faculty.trinity.edu/rjensen/265wp.htm
Many of our previous exchanges on the AECM about these issues are at the
following links:
Grade Inflation Issues
http://faculty.trinity.edu/rjensen/assess.htm#GradeInflation
Onsite Versus Online Learning
http://faculty.trinity.edu/rjensen/assess.htm#OnsiteVersusOnline
Student Evaluations and Learning
http://faculty.trinity.edu/rjensen/assess.htm#LearningStyles
January 2, 2005 reply from MABDOLMOHAMM@BENTLEY.EDU
In search of a
definition of a "perfect teacher"
A teacher can be
excellent by having one or more of a number of attributes (e.g., motivator,
knowledgeable, researcher), but a teacher will be perfect if he/she has a
combination of some or all of these attributes to bring the best out of
students.
We all can cite
anecdotal examples of great teachers that exhibited excellence in an important
attribute. Below are three examples.
A few years ago an
economics teacher of my son in high school admitted to parents at the
beginning of the year that he had very little knowledge of the subject matter
of the economics course that he was assigned to teach. He said that the school
needed a volunteer to take an economics course and then teach it at the high
school, and he volunteered. A former college football player, the teacher had
learned to motivate others to do their best and by the end of the year he had
motivated the students to learn a lot, much of it on their own. In fact to the
pleasant surprise of the teacher and parents two student groups from this
class made it to the state competition, one of which ended up being number one
and the other ranked number four in the state.
I have noticed that
many of the professors getting teaching awards from our beloved AAA have also
been heavy hitters in publishing. While one can be a good teacher without
being a heavy hitter in publishing, it may be that scholarship, broadly
defined (including knowledge of current developments), is an important factor
in being a good teacher. Even if one is not a heavy hitter, scholarship, as an
exercise of the brain, makes one a better teacher.
Others argue that
students are the best judges of good teaching. I recall having read a research
piece some time ago that those who consistently rate high in student
evaluations are good teachers, while those who consistently rate low are poor
teachers. The ones in the middle are those for whom other factors may be at
work (e.g., being too demanding or a tough grader).
Here is a research
question: Do we have a comprehensive inventory of the attributes of good
teaching, and if so, is it possible to come up with combinations of various
attributes to define a "perfect teacher" or an "expert
teacher"?
Ali Mohammad J.
Abdolmohammadi, DBA, CPA
http://web.bentley.edu/empl/a/mabdolmohamm/
John E. Rhodes Professor of Accounting
Bentley College 175 Forest Street Waltham, MA 02452
January 2, 2005 reply from Van Johnson [accvej@LANGATE.GSU.EDU]
Bob--
Your post reminded me
of one of my favorite editorials by Thomas Sowell in 2002. It is included
below.
"Good"
Teachers
The next time someone
receives an award as an outstanding teacher, take a close look at the reasons
given for selecting that particular person. Seldom is it because his or her
students did higher quality work in math or spoke better English or in fact
had any tangible accomplishments that were better than those of other students
of teachers who did not get an award.
A "good"
teacher is not defined as a teacher whose students learn more. A
"good" teacher is someone who exemplifies the prevailing dogmas of
the educational establishment. The general public probably thinks of good
teachers as people like Marva Collins or Jaime Escalante, whose minority
students met and exceeded national standards. But such bottom line criteria
have long since disappeared from most public schools.
If your criterion for
judging teachers is how much their students learn, then you can end up with a
wholly different list of who are the best teachers. Some of the most
unimpressive-looking teachers have consistently turned out students who know
their subject far better than teachers who cut a more dashing figure in the
classroom and receive more lavish praise from their students or attention from
the media.
My own teaching
career began at Douglass College, a small women's college in New Jersey,
replacing a retiring professor of economics who was so revered that I made it
a point never to say that I was "replacing" him, which would have
been considered sacrilege. But it turned out that his worshipful students were
a mass of confusion when it came to economics.
It was much the same
story at my next teaching post, Howard University in Washington. One of the
men in our department was so popular with students that the big problem every
semester was to find a room big enough to hold all the students who wanted to
enroll in his classes. Meanwhile, another economist in the department was so
unpopular that the very mention of his name caused students to roll their eyes
or even have an outburst of hostility.
Yet when I compared
the grades that students in my upper level class were making, I discovered
that none of the students who had taken introductory economics under Mr.
Popularity had gotten as high as a B in my class, while virtually all the
students who had studied under Mr. Pariah were doing at least B work. "By
their fruits ye shall know them."
My own experience as
an undergraduate student at Harvard was completely consistent with what I
later learned as a teacher. One of my teachers -- Professor Arthur Smithies --
was a highly respected scholar but was widely regarded as a terrible teacher.
Yet what he taught me has stayed with me for more than 40 years and his class
determined the course of my future career.
Nobody observing
Professor Smithies in class was likely to be impressed by his performance. He
sort of drifted into the room, almost as if he had arrived there by accident.
During talks -- lectures would be too strong a word -- he often paused to look
out the window and seemingly became fascinated by the traffic in Harvard
Square.
But Smithies not only
taught us particular things. He got us to think -- often by questioning us in
a way that forced us to follow out the logic of what we were saying to its
ultimate conclusion. Often some policy that sounded wonderful, if you looked
only at the immediate results, would turn out to be counterproductive if you
followed your own logic beyond stage one.
In later years, I
would realize that many disastrous policies had been created by thinking no
further than stage one. Getting students to think systematically beyond stage
one was a lifetime contribution to their understanding.
Another lifetime
contribution was a reading list that introduced us to the writings of
top-notch minds. It takes one to know one and Smithies had a top-notch mind
himself. One of the articles on that reading list -- by Professor George
Stigler of Columbia University -- was so impressive that I went to graduate
school at Columbia expressly to study under him. After discovering, upon
arrival, that Stigler had just left for the University of Chicago, I decided
to go to the University of Chicago the next year and study under him there.
Arthur Smithies would
never get a teaching award by the standards of the education establishment
today. But he rates a top award by a much older standard: By their fruits ye
shall know them.
January 2, 2005 reply from David Fordham, James Madison University
[fordhadr@JMU.EDU]
Bob, you've hit upon
an enlightening point.
Your post about
alums' vote for "Best" teacher or "Favorite" teacher not
being the one who "taught them the most", or even "the most
entertaining" contrasts vividly with my wording when I refer to
"Excellence in teaching".
The "Best"
teacher in a student's (or alum's) eye isn't always the "most excellent"
teacher.
Most of us (the
public at large, even) desire to be popular, and therefore "best" or
"favorite" in anything we do. But "best" and
"favorite" are far more subjective and individual-dependent
superlatives than "most excellent".
The latter term
denotes a high level of attaining the objective of the endeavor, whereas the
former terms denote a broader array of attributes (frequently skewed more
towards personality traits) appealing to personal tastes, where the overriding
attributes do not have to be the meeting of the fundamental objectives of
'teaching'.
I (and many of my
colleagues, possibly including yourself) generally strive for excellence in
teaching, -- which often requires excellence in many other attributes
(including personality ones, too!) in order to achieve. Unfortunately, many
students concentrate their attention on the personality-related ones. And
just as sadly, the AACSB accreditation jokers (along with elected state
legislators at the K-12 level!) concentrate only on "knowledge
transfer", "comprehension", and "measurable
rubrics". Both of these extremes ignore the overall mix which composes
"Excellence in teaching" in terms of achieving the educational
objectives.
(And yes, I strongly
believe that educational objectives include far more than mere knowledge
transfer... they include motivation, inspiration, appreciation, and many other
currently-*unmeasurable* traits, which is why I'm such an outspoken critic of
the AACSB's "assurance of learning" shenanigans.)
By the way, if you've
read this far: Bob, I've got to admit my poor choice of wording on an earlier
post. I indicated that some fields (such as pharmacology, genetics, etc.)
require "research" to teach well -- I didn't mean to equate research
with publication as is commonly done in academe, nor did I mean to equate it
with "advancing the knowledge of mankind" as it is probably more
accurately defined. I meant that those fields require effort to stay on top of
what's happening, as you more appropriately and accurately articulated. This
can take the form of overt activity to advance the knowledge of mankind, or it
can take the form of studious and constant attention to current literature and
activity of others. (I guess that's what I get for becoming so immersed in my
genealogical "research", which for the most part consists of
studiously searching and absorbing the "literature" and activities
of others, rather than creation on my own!) Anyway, I'd also like to agree
strongly with your assertion that "excellent teachers come in all
varieties". This is another fact which further confounds the
"measurement" of excellent teaching, and is often ignored by those
in the AACSB and state legislature education committees.
David Fordham
James Madison University
January 2, 2005 reply from David Albrecht [albrecht@PROFALBRECHT.COM]
David Fordham,
Another intriguing addition to the always interesting
posts of Bob Jensen.
This has caused my mind to wander, and it has stalled
wondering about the similarities between audit quality and teaching quality.
As I recall, there are three primary components or
perspectives of audit quality: the input of the auditor (now being made public
for the first time by the PCAOB), the accuracy of the auditor's output, and
the public perception of the auditor. Based on the maxim that perception is
reality, much musing and academic research has focused on the third component.
Perhaps an example will help explain what I'm getting at. For decades, the
firm of Arthur Andersen worked hard on the first two components and eventually
was bestowed with the third component. Then, according to Toffler, Squires et
al., Brewster, and some others, AA skimped on the first two components and
eventually had the third component withdrawn. Have the final biggest firms,
the Big Four, traveled the same path? I can't really tell, given the
confounding that the big firms' insurance function brings to the analysis. I do
know that for the largest companies (audited by the biggest auditing firms)
the large number of restatements causes me to doubt recent
audit quality.
In a fashion, there seem to be three similar
components of excellence in teaching. First, there is the input of the
teacher. There are many parts to this. There is the scholarly endeavor of
"keeping up." There is the creative thought that goes into course
design and material development. Of course, there is the preparation for each
class, and there is the classroom pedagogy. The second component would have to
be the amount and quality of learning that takes place. The third component
would be the public perception of the teacher.
With respect to the national public, it is easy to
see that many students engage the teachers from the most expensive, elite
schools. These students seem willing to pay the price needed to get that clean
opinion from the top firm, er, I mean that degree with honors from the top
school. Are these students acting in the most rational manner? It's hard to
tell. They seem to go to top research schools where they receive much of their
instruction from graduate students, many of whom lack American language and
cultural skills that I'd think necessary for much quality. Then, they get to
senior level classes and receive instruction from professors that sometimes
are too preoccupied with research to adequately shepherd their students. The
elite schools try not to mess up the good students too much. The students find
assurance in the perceived quality of the degree from the elite school.
Some students, frequently the less well heeled or
from the poorest educated families, attend lower ranked schools. Dare I say a
teaching school such as Bowling Green or James Madison? Anecdotal evidence
supports the contention that my school places much emphasis on the first
two components of teaching quality and does a quality job. However, not being
one of the biggest schools does put a hurt on the perceived quality of the
educational experience here.
I wonder if the PCAOB, the auditor's auditor, will be
any better than the AACSB, the business and accounting program's auditor. I
can tell from experience that a non-elite accounting program has a difference
of opinion with the AACSB, not because its students come from around the world
(they do) or that its graduates are in high demand by national and regional
employers (they are) or that its graduates progress rapidly in their careers
(they do), but because of an insufficient number of faculty publications in
top-tier journals. I think some of the time the AACSB misses the boat.
Will the PCAOB? I guess that will be the true test of
the similarity between auditor quality and teacher quality.
David Albrecht
Bowling Green State University
See also
Grade Inflation Versus Teaching Evaluations
Student Evaluations and
Learning Styles
There is an enormous problem of assuming that students who
wrote high evaluations of any course actually learned more than high performing
students who hated the course. Happiness and learning are two different
things.
Reasons why students often prefer online courses may have
little or nothing to do with actual learning. At the University of North
Texas where students can sometimes choose between an onsite or an online section
of a course, some students just preferred to be able to take a course in their
pajamas --- http://faculty.trinity.edu/rjensen/255wp.htm#NorthTexas
Some off-campus students prefer to avoid the hassle and time consumed driving to
campus and spending a huge amount of time searching for parking. Some
Mexico City students claim that they can save over five hours a day in commuting
time, which is time made free for studying (Jim Parnell, Texas A&M, in
partnership with Monterrey Tech, delivered an ALN Web MBA Program in Mexico City)
--- http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
In general, comparisons of onsite versus online test and
grade performance will tend to show "no differences" among good
students, because good students learn the material under varying circumstances.
Differences are more noteworthy among weaker students or students who tend to drop
courses, but there is a huge instructor effect that is difficult to factor out
of such studies. For more on this, go to http://faculty.trinity.edu/rjensen/assess.htm
Online Learning Styles
Here are a few links
of possible interest with regard to student evaluations and online learning
styles. In some cases you may have
to contact the presenters to get copies of their papers.
Probably the best place to start is with the Journal
of Asynchronous Learning --- http://www.sloan-c.org/publications/jaln/index.asp
For example, one of
the archived articles is entitled "Identifying Student Attitudes and
Learning Styles in Distance Education" in the September 2001 edition --- http://www.sloan-c.org/publications/jaln/v5n2/v5n2_valenta.asp
Three opinion types were identified in this study:
Students who identified with issues of Time and Structure in Learning, Social
Interaction in Learning, and Convenience in Learning. These opinions can be
used to aid educators in reaching their students and increasing the
effectiveness of their online courses. At UIC, this insight had direct
application to the evolution of course materials. Early application of
technology merely supplied a web site on which were posted syllabus, readings
and assignments. No opportunity existed for conferencing; thus, there existed
no opportunity for social learning. In a subsequent semester, conferencing
software was made available to the class, in addition to the website. Thus,
the opportunity was added for social learning. The faculty learned, however,
that every time a new technology was added, it experienced an increase in the
level of effort necessary to support the student. Ultimately, the University
made available a course management system, which significantly streamlined the
effort on the part of faculty to make course materials available to the
student. The system provides through a single URL the student's access to
course materials, discussion forums, virtual groups and chat, testing, grades,
and electronic communication.
This study is qualitative and confined to University
of Illinois at Chicago graduate and undergraduate students. The three opinion
types identified through this study, however, correlate closely with results
reported in the literature. All three groups of students, representing the
three opinion types, shared a belief in the importance of being able to work
at home. The studies of Richards and Ridley [9] and Hiltz [10] described
flexibility and convenience as both reasons students enrolled in online
courses and as the perception of students once enrolled. On the other hand,
all three groups of students thought unimportant the need to pay home phone
bills incurred in online education, whereas Bee [13] found that students felt
the university should provide financial assistance to offset the associated
costs of going online. There is evidence in the literature (viz., studies by
Guernsey [8] and Larson [18]) that support the opinion identified in this
study of the need by some students for face-to-face interaction. Since none of
the students taking the Q-sort had ever taken an online course, they were
unaware of the opportunities provided by technology [8,10] to potentially
increase individual attention from instructors above that normal in
face-to-face course offerings. Since no post-enrollment Q-sorts were
administered, there was no way to tell whether students continued to hold that
opinion, or whether that opinion has changed. It is anticipated that even if
the Q-set were administered to a larger number of students, similar viewpoints
would still emerge.
The authors wondered whether there was an association
between the opinion set held by the student and his or her learning style.
Preliminary data using the Canfield Learning Styles Inventory [27] show that
the factor one group--Time and Structure in Learning--exhibited a much higher
than expected proportion of independent learners. (74% of the students who had
high factor loadings on factor one were also classified as independent
learners. This difference was significant Z = 3.00, p < .025.) One might be
tempted to hypothesize a relationship between being an independent learner and
having the time and structure opinion of technology and education. Similarly,
one might also expect that individuals who had high factor loadings for factor
two (Social Factors in Learning) would be more likely classified as social
learners. Further research is necessary to understand how learning styles
contribute to the experience of online education.
There is a movement in both education and business to
harness the power of the World Wide Web to disseminate information. Educators
and researchers, aware of this technological paradigm shift, must become
invested in understanding the interactions of students and computing. The
field of human-computer interface design, as applied to interaction of
students in online courses, is ripe for research in the area of building
better virtual learning communities (thus addressing the needs of the social
learner) without overwhelming the ability of the independent learner to excel
on his or her own.
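The z-statistic and p-value quoted above for the proportion of independent learners follow from a standard one-sample z-test for a proportion. The study does not report its sample size or the hypothesized null proportion, so the numbers in the sketch below are purely illustrative, not a reconstruction of the study's data:

```python
from math import sqrt, erf

def one_proportion_z(successes, n, p0):
    """Two-sided one-sample z-test for a proportion.

    successes: observed count; n: sample size;
    p0: hypothesized proportion under the null.
    """
    p_hat = successes / n
    se = sqrt(p0 * (1 - p0) / n)  # standard error under the null
    z = (p_hat - p0) / se
    # Two-sided p-value from the standard normal CDF
    phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - phi(abs(z)))
    return z, p_value

# Hypothetical numbers for illustration only: 37 of 50 students
# (74%) classified as independent learners, tested against a
# hypothesized rate of 50%.
z, p = one_proportion_z(37, 50, 0.5)
```

With these invented inputs the observed 74% is well beyond what chance alone would produce under a 50% null, which is the kind of conclusion the quoted "Z = 3.00, p < .025" statement expresses.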
Learning and Teaching Styles (Australia) --- http://library.trinity.wa.edu.au/teaching/styles.htm
Online Learning Styles --- http://www.metamath.com/lsweb/dvclearn.htm
Adapting a Course to Different Learning Styles --- http://www.glue.umd.edu/~jpaol/ASA/
FasTrak Consulting --- http://www.fastrak-consulting.co.uk/tactix/features/lngstyle/style04.htm
VARK Questionnaire --- http://www.vark-learn.com/english/page.asp?p=questionnaire
Selected
professors --- http://online.sfsu.edu/~bjblecha/cai/cais00.htm
JCU Study Skills --- http://www.jcu.edu.au/studying/services/studyskills/learningst/
Cross-Cultural Considerations --- http://faculty.trinity.edu/rjensen/cultures/culture.htm
"How Do People Learn," Sloan-C Review, February 2004 ---
http://www.aln.org/publications/view/v3n2/coverv3n2.htm
Like some of the
other well known cognitive and affective taxonomies, the Kolb figure
illustrates a range of interrelated learning activities and styles beneficial
to novices and experts. Designed to emphasize reflection on learners’
experiences, and progressive conceptualization and active experimentation,
this kind of environment is congruent with the aim of lifelong learning. Randy
Garrison points out that:
From a content
perspective, the key is not to inundate students with information. The first
responsibility of the teacher or content expert is to identify the central
idea and have students reflect upon and share their conceptions. Students
need to be hooked on a big idea if learners are to be motivated to be
reflective and self-directed in constructing meaning. Inundating learners
with information is discouraging and is not consistent with higher order
learning . . . Inappropriate assessment and excessive information will
seriously undermine reflection and the effectiveness of asynchronous
learning.
Reflection on a big
question is amplified when it enters collaborative inquiry, as multiple styles
and approaches interact to respond to the challenge and create solutions. In
How People Learn: Brain, Mind, Experience, and School, John Bransford and
colleagues describe a legacy cycle for collaborative inquiry, depicted in a
figure by Vanderbilt University researchers (see image, lower left).
Continued in the article
Bob Jensen has some related (oft
neglected) comments about learning at http://faculty.trinity.edu/rjensen/265wp.htm
You can read more about online and asynchronous learning at http://faculty.trinity.edu/rjensen/255wp.htm
"Improve your Course Evaluations by having your Class Write Letters to
Future Students," by Brian Croxall, Chronicle of Higher Education,
April 23, 2013 ---
Click Here
http://chronicle.com/blogs/profhacker/improve-your-course-evaluations-by-having-your-class-write-letters-to-future-students/48659?cid=wc&utm_source=wc&utm_medium=en
Jensen Comment
A common problem with teaching evaluations is that students tend to be in a rush
when filling them out. It might help to ask students to write the course
advisory letters (see article above) in advance. Appeal to them to take these
letters seriously.
"Study: Little Difference in Learning in Online and In-Class Science
Courses," Inside Higher Ed, October 22, 2012 ---
http://www.insidehighered.com/quicktakes/2012/10/22/study-little-difference-learning-online-and-class-science-courses
A
study in Colorado has found little difference in
the learning of students in online or in-person introductory science
courses. The study tracked community college students who took science
courses online and in traditional classes, and who then went on to four-year
universities in the state. Upon transferring, the students in the two groups
performed equally well. Some science faculty members have expressed
skepticism about the ability of online students in science, due to the lack
of group laboratory opportunities, but the programs in Colorado work with
companies to provide home kits so that online students can have a lab
experience.
Jensen Comment
Firstly, note that online courses are not necessarily mass education (MOOC)
styled courses. The student-student and student-faculty interactions can be
greater online than onsite. For example, my daughter's introductory chemistry
class at the University of Texas had over 600 students. On the date of the final
examination the instructor had never met her and had zero control over her final grade. On the
other hand, her microbiology instructor in a graduate course at the University
of Maine became her husband over 20 years ago.
Another factor is networking. For example, Harvard Business School students
meeting face-to-face in courses bond in life-long networks that may be stronger
than for students who've never established networks via classes, dining halls,
volley ball games, softball games, rowing on the Charles River, etc. There's
more to learning than is typically tested in competency examinations.
My point is that there are many externalities to both onsite and online
learning. And concluding that there's "little difference in learning" depends
upon what you mean by learning. The SCALE experiments at the University of
Illinois found that online students having the same instructor tended to do slightly
better than onsite students. This is partly because there are fewer logistical
time wasters in online learning. The effect becomes larger for off-campus
students where commuting time (as in Mexico City) can take hours going to and
from campus.
http://faculty.trinity.edu/rjensen/255wp.htm
Bob Jensen's threads on assessment are at
http://faculty.trinity.edu/rjensen/Assess.htm
Assessment
Takes Center Stage in Online Learning:
The Saga of Western Governors University
Western Governors University was formed
by the Governors of 11 Western states in the United States and was later joined
by Indiana and Simon Fraser University in Canada. WGU attempted several
business models, including attempts to broker courses from leading state
universities and community colleges as well as a partnership with the North
American branch of the U.K.'s Open University. All business models to date
have been disappointments, and online enrollments remain almost
negligible. WGU has nevertheless survived with tax-dollar funding from
the founding states. The WGU homepage is at http://www.wgu.edu/wgu/index.html
One unique aspect of WGU is its
dedication to competency-based assessment (administered to date by Sylvan
Systems). An important article on this is entitled "Assessment Takes
Center Stage in Online Learning: Distance educators see the need to prove
that they teach effectively," by Dan Carnevale, The Chronicle of Higher
Education, April 13, 2001 --- http://www.chronicle.com/free/v47/i31/31a04301.htm
Students at Western
Governors University aren't required to take any courses. To earn a degree,
they must pass a series of assessment exams. The faculty members don't teach
anything, at least not in the traditional sense. Instead, they serve as
mentors, figuring out what students already know and what courses they need to
take to pass the exams.
Assessment also plays
a big role at the University of Phoenix Online. In a system modeled after the
university's highly successful classroom offerings, students are grouped
together in courses throughout an entire degree program, and they are given
batteries of exams both before and after the program. The tests enable the
university to measure exactly how much the students have learned, and to
evaluate the courses.
Indeed, assessment is
taking center stage as online educators experiment with new ways of teaching
and proving that they're teaching effectively.
And traditional
institutions, some observers say, should start taking notes.
Education researchers
caution that distance educators are still in the process of proving that they
can accurately assess anything, and that comparatively few distance-education
programs are actually participating in the development of new testing
strategies.
One difference
between assessment in classrooms and in distance education is that
distance-education programs are largely geared toward students who are already
in the workforce, which often involves learning by doing. In many of the
programs, students complete projects to show they not only understand what
they've learned but also can apply it -- a focus of many assessment policies.
In addition to such
projects, standardized tests are a key part of assessments in distance
education. These tests are usually administered online in proctored
environments, such as in a student's hometown community college.
Western Governors and
the University of Phoenix Online are among the most visible institutions
creating assessment methods, but they are not alone. Many other
distance-education programs use some form of outcomes-based assessment tests,
including Excelsior College (formerly Regents College), in Albany, N.Y.;
Pennsylvania State University's World Campus; Thomas Edison State College, in
Trenton, N.J.; the State University of New York's Empire State College; and
University of Maryland University College.
All of higher
education is moving toward outcomes-based assessments, with online education
leading the way, says Peter Ewell, senior associate at the National Center for
Higher Education Management Systems. The push for new assessment models in
online education comes largely from competition with its older brother,
traditional education, says Mr. Ewell. Because distance education is
comparatively new, he says, critics often hold it to a higher standard than
traditional education when judging quality. It has more to prove, and is
trying to use assessments that show its effectiveness as the proof.
Online education is
only one of several influences putting pressure on traditional education to do
more to assess the quality of courses. Accreditation agencies, state
governments, and policy boards are all heading toward an inevitable question,
Mr. Ewell says: How much bang for the buck is higher education putting out?
But Perry Robinson,
deputy director of higher education at the American Federation of Teachers,
says assessment exams shift the emphasis away from what he considers the most
important element of learning: student interaction with professors in a
classroom.
The federation has
been critical of distance learning in the past, saying an undergraduate degree
should always include a face-to-face component. Mr. Robinson says having degrees
that rely on students' passing tests reduces higher education to nothing more
than job training.
Also, Mr. Robinson
doesn't want to see the role of the professor diminished, because that person
knows the material the best and works with the students day after day.
"Assessment is involved in the classroom when you engage the students and
see the look of befuddlement on their faces," he says.
But Peggy L. Maki,
director of assessment at the American Association for Higher Education,
believes that all of higher education will move toward a system of assessing
outcomes for students. Although distance education is contributing to this
movement, it isn't the biggest factor, she says. "We're talking about a
cultural change."
Some of this change
is prompted by the demands of legislators and other policy makers, Ms. Maki
says. Also, institutions are feeling pressure from peers to create
outcomes-assessment models. "I think there have been more challenges with
people saying, 'Can you really do this?'" she says. "When they do,
others say, 'Well, we better follow suit.'"
But traditional and
distance-education institutions alike are struggling to figure out how to use
the results of assessment examinations to create programs and even
budgets. "This is the hardest part of the assessment process -- how you
use the results," Ms. Maki says.
Western Governors
University's assessment system is intended to measure the students' competency
in specific subjects. Because it doesn't matter to W.G.U. whether the students
learned the material on their own or from courses they've taken through the
university, the entire degree revolves around the assessment tests.
The university
doesn't create its own courses. Instead, it forms partnerships with other
universities around the country that have created online courses in various
subjects. A student seeking a degree must show competency in a number of
"domains." These include general education, such as writing and
mathematics, and domains specific to the subject, such as business management.
Western Governors
officials create some of their own assessment examinations and buy some from
other organizations, such as the ACT and the Educational Testing Service.
For W.G.U.'s own
exams, experts from the professional and academic arenas collaborate to
determine what students need to demonstrate to prove they are competent in a
field. Unlike traditional colleges, Western Governors separates assessment
from learning. The professors who grade the assessment exams have not had any
prior interaction with the student.
For the rest of
the article, go to http://www.chronicle.com/free/v47/i31/31a04301.htm
Update Message from Syllabus News on February 5,
2002
Western Governors University
Meeting Access Goals
The Western Governors
University released its annual report, which said the private, non-profit
university, founded by 19 western governors, is achieving its goals to expand
access to higher education, especially for working adults. WGU President Bob
Mendenhall said, "The constraints on time due to work and family
commitments are access issues ... so the flexibility provided by WGU's online,
competency-based model is very appealing to a broad spectrum of
students." WGU currently has about 2,500 students enrolled, up from 500
students one year ago. The average WGU student is 40 years old, and over 90
percent work full-time.
For more
information, visit: http://www.wgu.edu
ALSO SEE:
Three sample assessment questions from Western
Governors University in the area of quantitative reasoning, and the answers.
Western Governors University ---
http://en.wikipedia.org/wiki/Western_Governors_University
Instructors do not assign the grades in this successful "competency-based
testing" university.
"A President Brings a Revolutionary University to Prominence," by
Goldie Blumenstyk, Chronicle of Higher Education, February 26, 2012 ---
http://chronicle.com/article/A-President-Brings-a/130915/?sid=wc&utm_source=wc&utm_medium=en
Western Governors University,
first conceived in 1995, embodied an idea that was
ahead of its time. And early in its life, that showed.
Traditional accreditors resisted its model: an all-online, competency-based
institution. Experts scoffed at its grandiose promises to reshape higher
education. Students, unmoved by its founders' ambitious early enrollment
projections, mostly stayed away.
Yet a Utah technology entrepreneur named Robert W.
Mendenhall, who had been asked to kick-start the venture a few years into
its existence, says he never doubted. "It took me about 30 seconds to decide
I would do it," says Mr. Mendenhall, WGU's president since 1999. "I was
always confident that we'd pull it off. The idea made so much sense."
Today the unusual institution has drawn growing
notice from national mainstream news media and at meetings on college
affordability by both the U.S. Senate and President Obama. It has a growing
student body of more than 25,000.
Mr. Mendenhall, now 57, came to WGU when it had no
students and no degrees. "The vision of it was just coagulating," recalls
Michael O. Leavitt, the former Utah governor who was instrumental in the
institution's founding and in Mr. Mendenhall's hiring.
With his know-how for building start-up businesses,
a practical willingness to shed time-consuming and unpromising components
(like a plan to run an online catalog of online courses from other
institutions), and what Mr. Leavitt calls a determined "sense of mission"
for low-cost, competency-based higher education, Mr. Mendenhall kept the
nonprofit institution moving.
Internally, he was an "in your face" presence, a
colleague says, while externally, thanks in no small part to the political
backing of 19 governors, he pulled the strings that would eventually land
WGU millions in federal grants to develop its online programs and its
distinguishing proficiency exams by which students progress toward a degree,
and millions more from the Lumina Foundation to create what would become its
turning point, a teachers' college.
Continued in article
Bob Jensen's threads on competency-based assessment ---
http://faculty.trinity.edu/rjensen/Assess.htm#ComputerBasedAssessment
From PublicationsShare.com --- http://publicationshare.com/
Free Downloadable
Reports from CourseShare:
- CourseShare.com
researchers have collected online survey data from both college faculty
and corporate trainers regarding their online learning needs and supports.
Both surveys were co-sponsored by JonesKnowledge.com and CourseShare.com
and are available below:
Bonk, C. J.
(2002). Online Training in an Online World. Bloomington,
IN: CourseShare.com
(Note: Distribution or Reproduction of more than 50 copies of this
report requires permission from CourseShare.com or JonesKnowledge.com)
Online
Training in an Online World (Adobe PDF format :: 649 KB)
Executive
Summary Only (Adobe PDF format :: 284 KB)
Bonk, C. J.
(2001). Online Teaching in an Online World. Bloomington,
IN: CourseShare.com
(Note: Distribution or Reproduction of more than 50 copies of this
report requires permission from CourseShare.com or JonesKnowledge.com)
Online
Teaching in an Online World (Adobe PDF format :: 308 KB)
Executive
Summary Only (Adobe PDF format :: 68 KB)
- As a Senior
Consortium Research Fellow with the U.S. Army Research Institute (ARI),
Dr. Curt Bonk of CourseShare.com has written the following major report
with Dr. Robert Wisher from ARI that is now available online as well as in
a hardcopy format:
Bonk, C. J., &
Wisher, R. A. (2000). Applying collaborative and e-learning tools
to military distance learning: A research framework. (Technical
Report #1107). Alexandria, VA: U.S. Army Research Institute for the
Behavioral and Social Sciences. (Note: this report has unlimited
distribution.)
Click
to Download PDF of this file.
- Bonk, C. J., &
Cunningham, D. J. (1998). Chapter 2: Searching for learner-centered,
constructivist, and sociocultural components of collaborative educational
learning tools. In C. J. Bonk, & K. S. King (Eds.), Electronic
collaborators: Learner-centered technologies for literacy, apprenticeship,
and discourse (pp. 25-50). Mahwah, NJ: Erlbaum.
Click
to Download PDF of this file.
(Note: Permission to download from this site was granted by the
publisher Lawrence Erlbaum Associates; call LEA at 1-800-9books9 to
order the book)
- Introduction section (including a list of contributors) to Bonk, C. J., & King, K.
S. (Eds.). (1998). Electronic collaborators: Learner-centered
technologies for literacy, apprenticeship, and discourse. Mahwah,
NJ: Erlbaum. ISBN: 0-8058-2796-X (cloth); 0-8058-2797-8 (paper).
Click
to Download PDF of this file.
For more on the Electronic Collaborators book, see: http://php.indiana.edu/~cjbonk/book.html
Table of contents: http://php.indiana.edu/~cjbonk/nbook.html#table
Book Contributors: http://php.indiana.edu/~cjbonk/nbook.html#contributor
(Note: Permission to download from this site was granted by the
publisher Lawrence Erlbaum Associates; call LEA at 1-800-9books9 to
order the book)
Bob Jensen's threads on education
technologies are at http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
From Distance Education and Its
Challenges: An Overview, by D.G. Oblinger, C.A. Barone, and B.L.
Hawkins (ACE, American Council on Education Center for Policy Analysis and
Educause, 2001, pp. 39-40.) http://www.acenet.edu/bookstore/pdf/distributed-learning/distributed-learning-01.pdf
Appendix 4
Measures of Quality in Internet-Based Distance Learning
With the worldwide growth of distributed learning, attention is being
paid to the nature and quality of online higher education.
Twenty-four benchmarks were identified in a study conducted by the
Institute for Higher Education Policy. To formulate the
benchmarks, the report identified firsthand, practical strategies being
used by U.S. colleges and universities considered to be leaders in
online distributed learning. The benchmarks were divided into
seven categories of quality measures.
Institutional Support Benchmarks
1. A documented technology plan includes electronic security
measures to ensure both quality standards and the integrity and
validity of information.
2. The reliability of the technology delivery system is as close to
failsafe as possible.
3. A centralized system provides support for building and
maintaining the distance education infrastructure.
Course Development Benchmarks
4. Guidelines regarding minimum standards are used for course
development, design, and delivery, while learning outcomes, not the
availability of existing technology, determine the technology being
used to deliver course content.
5. Instructional materials are reviewed periodically to ensure that
they meet program standards.
6. Courses are designed to require students to engage themselves in
analysis, synthesis, and evaluation as part of their course and
program requirements.
Teaching/Learning Benchmarks
7. Student interaction with faculty and other students is essential
and is facilitated through a variety of ways, including voice mail
and/or email.
8. Feedback to student assignments and questions is constructive and
provided in a timely manner.
9. Students are instructed in the proper methods of effective
research, including assessment of the validity of resources.
Course Structure Benchmarks
10. Before starting an online program, students are advised about
the program to determine if they possess the self-motivation and
commitment to learn at a distance and if they have access to the
minimal technology required by the course design.
11. Students are provided with supplemental information that
outlines course objectives, concepts, and ideas, and learning outcomes
for each course are summarized in a clearly written, straightforward
statement.
12. Students have access to sufficient library resources that may
include a “virtual library” accessible through the web.
13. Faculty and students agree on an acceptable length of time
for student assignment completion and faculty response.
Student Support Benchmarks
14. Students receive information about programs including admission
requirements, tuition and fees, books and supplies, technical and
proctoring requirements, and student support services.
15. Students are provided with hands-on training and information to
aid them in securing material through electronic databases,
inter-library loans, government archives, news services, and other
sources.
16. Throughout the duration of the course/program, students have
access to technical assistance, including detailed instructions
regarding the electronic media used, practice sessions prior to the
beginning of the course, and convenient access to technical support
staff.
17. Questions directed to student service personnel are answered
accurately and quickly, with a structured system in place to address
student complaints.
Faculty Support Benchmarks
18. Technical assistance in course development is available to
faculty, who are encouraged to use it.
19. Faculty members are assisted in the transition from classroom
teaching to online instruction and are assessed during the process.
20. Instructor training and assistance, including peer mentoring,
continues through the progression of the online course.
21. Faculty members are provided with written resources to deal with
issues arising from student use of electronically accessed data.
Evaluation and Assessment Benchmarks
22. The program’s educational effectiveness and teaching/learning
process is assessed through an evaluation process that uses several
methods and applies specific standards.
23. Data on enrollment, costs, and successful/innovative uses of
technology are used to evaluate program effectiveness.
24. Intended learning outcomes are regularly reviewed to ensure
clarity, utility, and appropriateness.
"The Chronicle's special report on Online Learning explores how calls for
quality control and assessment are reshaping online learning," (Not Free),
Chronicle of Higher Education, November 2011 ---
https://www.chronicle-store.com/Store/ProductDetails.aspx?CO=CQ&ID=78602&cid=ol_nlb_wc
As online learning spreads throughout higher
education, so have calls for quality control and assessment. Accrediting
groups are scrambling to keep up, and Congress and government officials
continue to scrutinize the high student-loan default rates and aggressive
recruiting tactics of some for-profit, mostly online colleges. But the push
for accountability isn't coming just from outside. More colleges are looking
inward, conducting their own self-examinations into what works and what
doesn't.
Also in this year's report:
- Strategies for teaching and doing research
online
- Members of the U.S. military are taking online
courses while serving in Afghanistan
- Community colleges are using online technology
to keep an eye on at-risk students and help them understand their own
learning style
- The push to determine what students learn
online, not just how much time they spend in class
- Presidents' views on e-learning
Bob Jensen's threads on asynchronous learning ---
http://faculty.trinity.edu/rjensen/255wp.htm
Bob Jensen's threads on online course and degree programs ---
http://faculty.trinity.edu/rjensen/Crossborder.htm
"Making Assessment Work," by Kaplan University, Chronicle of Higher
Education, November 4, 2011 ---
http://chronicle.com/article/Making-Assessment-Work/129266/
Accreditors are increasingly requiring assessment
of student learning to become a focus for post-secondary institutions. The
increased importance placed on assessment is not without good reason.
Student learning is an important outcome of higher education. With
increasing accreditation and public pressure, student learning should be
more important to colleges and universities than it ever has been. What is
important should be measured and what is measured can be improved.
Case in point: Kaplan University (KU) is a
for-profit, career-oriented university where learning is not just one of the
important outcomes; it is the most important outcome. More specifically,
Kaplan University’s focus is student learning that will materialize into
positive career outcomes for its students. With this mission in mind, Kaplan
University spent four years planning, developing and implementing Course
Level Assessment (CLA), a system specifically designed to close the loop
between measurement and improved student outcomes.
CLA is a multi-tiered assessment system mapping
course level learning goals to program level learning goals. Each of the
1,000 courses contains an average of four to six learning goals that map to
one or more of the program learning objectives. Assessment against these
outcomes is comprehensive; every outcome is assessed for every student,
every term in every course. The learning outcomes and scoring rubrics that
appear in the online grade book all come from a common data repository. The
instructor scores the assessment directly in the online gradebook and the
data automatically feed back into the data repository. By linking those
objectives, rubrics, and assessment data, we can compare student achievement
on any specific objective for a course across any number of instructors,
sections, or terms with the confidence that the same assessment was used,
addressing the same learning objective, graded with the same rubric.
The data mapping enables rapid and sophisticated
analytics that support a tight feedback loop. Another design element of CLA
that enhances a short feedback cycle is the proximity of the assessment to
the learning event. This is a key differentiator of Kaplan’s CLA. While
other strategies can produce reliable evidence of student learning, they are
too far removed from the actual learning to pinpoint any specific deficiency in
curriculum or instruction. By combining assessments linked directly to
specific learning and automated data analytics, CLA provides a platform to
rapidly test and improve curriculum whether on-ground or on-line.
With the technology foundation for CLA fully in
place, KU evaluated curricular changes in 221 courses with assessment data.
The results showed that 44% of the revisions produced statistically
significant improvements while only 23% led to decreases. The CLA system is
the cornerstone of all programs to analyze these interventions and make
evidence-based decisions about course offerings that drive student outcomes.
Continued in article
MITx Open Sharing Wonder
"MIT Mints a Valuable New Form of Academic Currency," by Kevin Carey,
Chronicle of Higher Education, January 22, 2012 ---
http://chronicle.com/article/MIT-Mints-a-Valuable-New-Form/130410/?sid=wc&utm_source=wc&utm_medium=en
The Massachusetts Institute of Technology has
invented or improved many world-changing things—radar, information theory,
and synthetic self-replicating molecules, to name a few. Last month the
university announced, to mild fanfare, an invention that could be similarly
transformative, this time for higher education itself. It's called MITx. In
that small lowercase letter, a great deal is contained.
MITx is the next big step in the
open-educational-resources movement that MIT helped start in 2001, when it
began putting its course lecture notes, videos, and exams online, where
anyone in the world could use them at no cost. The project exceeded all
expectations—more than 100 million unique visitors have accessed the courses
so far.
Meanwhile, the university experimented with using
online tools to help improve the learning experience for its own students in
Cambridge, Mass. Now MIT has decided to put the two together—free content
and sophisticated online pedagogy—and add a third, crucial ingredient:
credentials. Beginning this spring, students will be able to take free,
online courses offered through the MITx initiative. If they prove they've
learned the material, MITx will, for a small fee, give them a credential
certifying as much.
In doing this, MIT has cracked one of the
fundamental problems retarding the growth of free online higher education as
a force for human progress. The Internet is a very different environment
than the traditional on-campus classroom. Students and employers are rightly
wary of the quality of online courses. And even if the courses are great,
they have limited value without some kind of credential to back them up.
It's not enough to learn something—you have to be able to prove to other
people that you've learned it.
The best way to solve that problem is for a
world-famous university with an unimpeachable reputation to put its brand
and credibility behind open-education resources and credentials to match.
But most world-famous universities got that way through a process of
exclusion. Their degrees are coveted and valuable precisely because they're
expensive and hard to acquire. If an Ivy League university starts giving
degrees away for free, why would everyone clamor to be admitted to an Ivy
League university?
MIT is particularly well suited to manage that
dilemma. Compared with other elite universities, MIT has an undergraduate
admissions process that is relatively uncorrupted by considerations of who
your grandfather was, the size of the check your parents wrote to the
endowment, or your skill in moving a ball from one part of a playing field
to another. Also in marked contrast to other (in some cases highly
proximate) elite institutions, MIT undergraduates have to complete a
rigorous academic curriculum to earn a degree. This means there should be
little confusion between credentials issued by MIT and MITx. The latter
won't dilute the value of the former.
MIT is also populated by academic leaders with the
better traits of the engineer: a curiosity about how things work and an
attraction to logical solutions. So MITx will be accompanied by a campuswide
research effort aimed at discovering what kinds of online learning tools,
like simulation laboratories and virtual-learning communities, are most
effective in different combinations of subject matter and student
background. MITx courses will also be delivered on an "open learning
platform," which means that any other college or higher-education provider
will be able to make its course available through the same system.
The university is fortunate to have faculty who are
comfortable working with technological tools and eager to try out new
educational methods. Professors in the MIT Computer Science and Artificial
Intelligence Laboratory (Csail) are already experimenting with ideas like "crowdsourced"
grading of computer programs, in which qualified Web users comment on
student work. MIT also plans to retool its lecture videos to make them
interactive and responsive to students' academic progress. Anant Agarwal,
director of Csail and a leader of the MITx effort, notes that "human
productivity has gone up dramatically in the past several decades due to the
Internet and computing technologies, but amazingly enough the way we do
education is not very different from the way we did it a thousand years
ago."
Most important, MITx is animated by a sense of
obligation to maximize human potential. Great research universities have
vast abilities to distribute knowledge across the globe. But until recently,
they have been highly limited in their ability, and willingness, to
distribute authentic education. Before the information-technology
revolution, the constraints were physical—you can fit only so many people in
dorms and classrooms along the Charles River.
The Internet has ripped those barriers away. As
MIT's provost, L. Rafael Reif, observes, "There are many, many learners
worldwide—and even here in the United States—for whom the Internet is their
only option for accessing higher education." Reif emphasizes that the
courses will be built with MIT-grade difficulty. Not everyone will be able
to pass them. But, he says, "we believe strongly that anyone in the world
who can dedicate themselves and learn this material should be given a
credential."
This sensible and profound instinct sets a new
standard for behavior among wealthy, famous universities. Elite colleges all
allege to be global institutions, and many are known around the world. But
it is simply untenable to claim global leadership in educating a planet of
seven billion people when you hoard your educational offerings for a few
thousand fortunates living together on a small patch of land.
Continued in article
College diplomas might be obtained in three different scenarios:
- Traditional College Courses
Students take onsite or online courses that are graded by their instructors.
- Competency-Based College Courses
Students take onsite or online courses and are then given competency-based
examinations.
Examples include the increasingly popular Western Governors University and
Canada's Chartered Accountancy School of Business (CASB).
http://faculty.trinity.edu/rjensen/Assess.htm#ComputerBasedAssessment
- Competency-Based College Courses That Never Meet or Rarely Meet
Students might study from course materials and videos in classes that do not
meet or rarely meet with instructors.
In the 1900s the University of Chicago gave degrees to students who took
only examinations to pass courses.
In current times BYU teaches the first two accounting courses from
variable-speed video disks and then administers competency-based examinations.
The University of New Hampshire now is in the process of developing a degree
program for students who take only competency-based examinations to pass courses.
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#NoInstructors
Recently, certificates of online "attendance" in courses that do not count as
college credit toward diplomas have become increasingly popular. MIT is
providing such certificates ---
"Will MITx Disrupt Higher Education?" by Robert Talbert, Chronicle of
Higher Education, December 20, 2011 ---
http://chronicle.com/blognetwork/castingoutnines/2011/12/20/will-mitx-disrupt-higher-education/?sid=wc&utm_source=wc&utm_medium=en
There are no admission requirements or prerequisites to enroll in these online
courses. Presumably the only tests of competency might be written or oral
examinations given by potential employers. For example, if knowledge of Bessel
Functions is required on the job, a potential employer might determine in one
way or another that the student has a competency in Bessel Functions ---
http://en.wikipedia.org/wiki/Bessel_Functions
In all the above instances, a student's transcript is based upon course
content whether or not the student takes courses and/or competency-based
examinations in the content of those courses.
StraighterLine's new certificates based upon "Critical-Thinking Tests" are an
entirely different concept. Presumably the certificates are no longer rooted
in knowledge of content. Rather, these are certificates based upon
critical-thinking skills in selected basic courses such as a writing skills course.
Critical Thinking Badges for Brains That Do Not Have Course Content
Competency
"Online Course Provider, StraighterLine, to Offer Critical-Thinking Tests to
Students," by Jeff Selingo, Chronicle of Higher Education, January
19, 2012 ---
Click Here
http://chronicle.com/blogs/wiredcampus/online-course-provider-straighterline-to-offer-critical-thinking-tests-to-students/35092?sid=at&utm_source=at&utm_medium=en
In my opinion these will be a much harder sell in the market. Whereas a
potential employer can assess whether an applicant has the requisite skills in
something like Bessel Functions, how does an employer or college admissions
officer verify that StraighterLine's "Critical-Thinking Tests" are worth a diddly
crap and, if so, what does passing such tests mean in terms of job skills?
Thus far I'm not impressed with Critical Thinking Certificates unless they
are also rooted in course content apart from "thinking" alone.
Bob Jensen's threads on the BYU Variable Speed Video Courses ---
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#BYUvideo
Bob Jensen's threads on assessment ---
http://faculty.trinity.edu/rjensen/Assess.htm
Bob Jensen's threads on open-sharing courses, lectures, videos, tutorials,
and course materials from prestigious universities ---
http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI
Bob Jensen's threads on online training and education alternatives ---
http://faculty.trinity.edu/rjensen/Crossborder.htm
Bob Jensen's threads on higher education controversies ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
"Far From Honorable," by Steve Kolowich, Inside Higher Ed,
October 25, 2011 ---
http://www.insidehighered.com/news/2011/10/25/online-students-might-feel-less-accountable-honor-codes
Much of the urgency around creating a “sense of
community” in online courses springs from a desire to keep online students
from dropping out. But a recent paper suggests that strengthening a sense of
social belonging among online students might help universities fight another
problem: cheating.
In a series of experiments, researchers at Ohio
University found that students in fully online psychology courses who signed
an honor code promising not to cheat broke that pledge at a significantly
higher rate than did students in a “blended” course that took place
primarily in a classroom.
“The more distant students are, the more
disconnected they feel, and the more likely it is that they’ll rationalize
cheating,” Frank M. LoSchiavo, one of the authors, conjectured in an
interview with Inside Higher Ed.
While acknowledging the limitations inherent to a
study with such a narrow sample, and the fact that motivations are
particularly hard to pin down when it comes to cheating, LoSchiavo and Mark
A. Shatz, both psychology professors at Ohio University's Zanesville campus,
said their findings may indicate that meeting face-to-face with peers and
professors confers a stronger sense of accountability among students. “Honor
codes,” LoSchiavo said, “are more effective when there are [strong] social
connections.”
Honor codes are not, of course, the only method of
deterring cheating in online courses. The proliferation of online programs
has given rise to a
cottage industry of
remote proctoring technology, including one product that takes periodic
fingerprint readings while monitoring a student’s test-taking environment
with a 360-degree camera. (A 2010 survey by the Campus Computing Project
suggests that a minority of institutions authenticate the identities of
online students as a rule.)
But LoSchiavo said that he and Shatz were more
interested in finding out whether honor codes held any sway online. If so,
then online instructors might add pledges to their arsenal of anti-cheating
tools, LoSchiavo said. If not, it still provides an intriguing contribution to
the discussion about student engagement and “perceived social distance” in
the online environment.
They experimented with the effectiveness of honor
codes in three introductory psychology courses at Ohio University. The first
course had 40 students and was completely online. These students, like those
in subsequent trials, were a mix of traditional-age and adult students,
mostly from regional campuses in the Ohio University system. There was no
honor code. Over the course of the term, the students took 14
multiple-choice quizzes with no proctoring of any kind. At the end of the
term, 73 percent of the students admitted to cheating on at least one of
them.
The second trial involved another fully online
introductory course in the same subject. LoSchiavo and Shatz divided the
class evenly into two groups of 42 students, and imposed an honor code --
posted online with the other course materials -- on one group but not the
other. The students “digitally signed the code during the first week of the
term, prior to completing any assignments.” The definition of cheating was
the same as in the first trial: no notes, no textbooks, no Internet, no
family or friends. There was no significant difference in the self-reported
cheating between the two groups.
In a third trial, the professors repeated the
experiment with 165 undergraduates in a “blended” course, where only 20
percent of the course was administered online and 80 percent in a
traditional classroom setting. Again, they split the students into two
groups: one in which they were asked to sign an honor code, and another in
which they were not.
This time, when LoSchiavo and Shatz surveyed the
students at the end of the term, there was a significant difference:
Students who promised not to cheat were about 25 percent less likely to
cheat than were those who made no such promise. Among the students who had
not signed the code, 82 percent admitted to cheating.
LoSchiavo concedes that this study offers no
definitive answers on the question of whether students are more likely to
cheat in fully online courses. Cheating is more often than not a crime of
opportunity, and containing integrity violations probably has much more to
do with designing a system that limits the opportunities to cheat and gives
relatively little weight to those assignments for which cheating is hardest
to police.
“The bottom line is that if there are
opportunities, students will cheat,” he said. “And the more opportunities
they have, the more cheating there will be, and it is incumbent upon
professors to put in a system that, when it’s important, cheating will be
contained.”
Continued in article
Jensen Comment
I think universities like Trinity University that expanded their honor codes to
include student courts are generally happy with the operations of those honor
codes. However, Trinity has only full time students and no distance education
courses.
One thing that I hated giving up was grading control. For most of my teaching
career I gave F grades to students who seriously cheated in my courses. Under
the revised Trinity Honor Code, instructors can no longer control the granting
of F grades for cheating.
When I was a student at Stanford the Honor Code included a pledge to report
cheating of other students. I think most universities have watered down this
aspect of their honor codes because, in this greatly increased era of
litigation, student whistle blowers can be sued big time. Universities may
continue to encourage such whistle blowing, but they no longer make students
sign pledges that on their honor they will be whistleblowers if they do not want
to bear the risk of litigation by students they report.
Reporting Assessment Data is No Big Deal for
For-Profit Learning Institutions
"What Took You So Long?" by Doug Lederman, Inside Higher Ed, June 15,
2007 ---
http://www.insidehighered.com/news/2007/06/15/cca
You’d
have been hard pressed to attend a major higher education
conference over the last year where the work of the
Secretary of Education’s
Commission on the Future of Higher Education
and the U.S. Education Department’s
efforts to carry it out were not discussed. And they were
rarely mentioned in the politest of terms, with faculty
members, private college presidents, and others often
bemoaning proposals aimed at ensuring that colleges better
measure the learning outcomes of their students and that
they do so in more readily comparable ways.
The annual
meeting of the Career College Association, which represents
1,400 mostly for-profit and career-oriented colleges,
featured its own panel session Thursday on Education
Secretary Margaret Spellings’ various “higher education
initiatives,” and it had a very different feel from
comparable discussions at meetings of public and private
nonprofit colleges. The basic theme of the panelists and the
for-profit college leaders in the audience at the New
Orleans meeting was: “What’s the big deal? The government’s
been holding us accountable for years. Deal with it.”
Ronald S.
Blumenthal, vice president for operations and senior vice
president for administration at Kaplan Higher Education, who
moderated the panel, noted that the department’s push for
some greater standardization of how colleges measure the
learning and outcomes of their students is old hat for
institutions that are accredited by “national” rather than
“regional” accreditors, as most for-profit colleges are. For
nearly 15 years, ever since the Higher Education Act was
renewed in 1992, national accreditors have required
institutions to report placement rates and other data, and
institutions that perform poorly compared to their peers
risk losing accreditation.
“These are
patterns that we’ve been used to for more than 10 years,”
said Blumenthal, who participated on the Education
Department negotiating panel that considered possible
changes this spring in federal rules governing
accreditation. “But the more traditional schools have not
done anything like that, and they don’t want to. They say
it’s too much work, and they don’t have the infrastructure.
We had to implement it, and we did implement it. So what
if it’s more work?” he said, to nods from many in the
audience.
Geri S.
Malandra of the University of Texas System, another member
of the accreditation negotiating team and a close adviser to
Charles Miller, who headed the Spellings Commission and
still counsels department leaders, said that nonprofit
college officials (and the news media, she suggested) often
mischaracterized the objectives of the commission and
department officials as excessive standardization.
“Nobody was
ever saying, there is one graduation rate for everyone
regardless of the program,” Malandra said. “You figure out
for your sector what makes sense as the baseline. No matter
how that’s explained, and by whom, the education secretary
or me, it still gets heard as one-size-fits-all, a single
number, a ‘bright line’” standard. “I don’t think it was
ever intended that way.”
The third
panelist, Richard Garrett, a senior analyst at Eduventures,
an education research and consulting company, said the lack
of standardized outcomes measures in higher education “can
definitely be a problem” in terms of gauging which
institutions are actually performing well. “It’s easy to
accuse all parts of higher education of having gone too far
down the road of diversity” of missions and measures,
Garrett said.
“On the
other hand,” said Garrett, noting that American colleges
have long been the envy of the world, “U.S. higher education
isn’t the way it is because of standardization. It is as
successful as it is because of diversity and choice and
letting a thousand flowers bloom,” he said, offering a voice
of caution that sounded a lot like what one might have heard
at a meeting of the National Association of Independent
Colleges and Universities or the American Federation of
Teachers.
December 10, 2004 message from Carolyn Kotlas [kotlas@email.unc.edu]
E-LEARNING ONLINE PRESENTATIONS
The University of Calgary Continuing Education
sponsors Best Practices in E-Learning, a website that provides a forum for
anyone working in the field to share their best practices. This month's
presentations include:
-- "To Share or Not To Share: There is No
Question" by Rosina Smith Details a new model for permitting "the
reuse, multipurposing, and repurposing of existing content"
-- "Effective Management of Distributed Online
Educational Content" by Gary Woodill "[R]eviews the history of
online educational content, and argues that the future is in distributed
content learning management systems that can handle a wide diversity of
content types . . . identifies 40 different genres of online educational
content (with links to examples)"
Presentations are in various formats, including
Flash, PDF, HTML, and PowerPoint slides. Registered users can interact with
the presenters and post to various discussion forums on the website. There is
no charge to register and view presentations. You can also subscribe to their
newsletter which announces new presentations each month. (Note: No archive of
past months' presentations appears to be on the website.)
For more information, contact: Rod Corbett, University of Calgary
Continuing Education; tel:403-220-6199 or 866-220-4992 (toll-free); email: rod.corbett@ucalgary.ca
; Web: http://elearn.ucalgary.ca/showcase/.
NEW APPROACHES TO
EVALUATING ONLINE LEARNING
"The clear
implication is that online learning is not good enough and needs to prove its
worth before gaining full acceptance in the pantheon of educational practices.
This comparative frame of reference is specious and irrelevant on several
counts . . ." In "Escaping the Comparison Trap: Evaluating Online
Learning on Its Own Terms (INNOVATE, vol. 1, issue 2, December 2004/January
2005), John Sener writes that, rather than being inferior to classroom
instruction, "[m]any online learning practices have demonstrated superior
results or provided access to learning experiences not previously
possible." He describes new evaluation models that are being used to
judge online learning on its own merits. The paper is available online at http://www.innovateonline.info/index.php?view=article&id=11&action=article.
You will need to
register on the Innovate website to access the paper; there is no charge for
registration and access.
Innovate [ISSN
1552-3233] is a bimonthly, peer-reviewed online periodical published by the
Fischler School of Education and Human Services at Nova Southeastern
University. The journal focuses on the creative use of information technology
(IT) to enhance educational processes in academic, commercial, and government
settings. Readers can comment on articles, share material with colleagues and
friends, and participate in open forums. For more information, contact James
L. Morrison, Editor-in-Chief, Innovate; email: innovate@nova.edu
; Web: http://www.innovateonline.info/.
You might find some helpful information
in the following reference --- http://202.167.121.158/ebooks/distedir/bestkudo.htm
Phillips, V., & Yager, C. The
best distance learning graduate schools: Earning your degree without leaving
home.
This book profiles 195 accredited institutions that offer graduate degrees via
distance learning. Topics include: graduate study, the quality and benefits of
distance education, admission procedures and criteria,
available education delivery systems, as well as accreditation, financial aid,
and school policies.
A review is given at http://distancelearn.about.com/library/weekly/aa022299.htm
Some good assessment advice is given at
http://www.ala.org/acrl/paperhtm/d30.html
A rather neat PowerPoint show from
Brazil is provided at http://www.terena.nl/tnc2000/proceedings/1B/1b2.ppt
(Click on the slides to move forward.)
The following references may be helpful
in terms of evaluation forms:
- Faculty
Course Evaluation Form
University of Bridgeport
- Web-Based
Course Evaluation Form
Nashville State Technology Institute
- Guide
to Evaluation for Distance Educators
University of Idaho Engineering Outreach Program
- Evaluation
in Distance Learning: Course Evaluation
World Bank Global Distance EducatioNet
A Code of Assessment Practice is given
at http://cwis.livjm.ac.uk/umf/vol5/ch1.htm
A comprehensive outcomes assessment
report (for the University of Colorado) is given at http://www.colorado.edu/pba/outcomes/
A Distance Learning Bibliography is
available at http://mason.gmu.edu/~montecin/disedbiblio.htm
Also see "Integration of
Information Resources into Distance Learning Programs" by Sharon M.
Edge and Denzil Edge at http://www.learninghouse.com/pubs_pubs02.htm
"A New Methodology for Evaluation:
The Pedagogical Rating of Online Courses," by Nishikant Sonwalkar, Syllabus
Magazine, January 2002, 18-21 --- http://www.syllabus.com/syllabusmagazine/article.asp?id=5914
This article proposes a means of
numerically evaluating various attributes of an online course and then
aggregating these into an "Overall Rating." Obviously, any model
for this type of aggregation will be highly controversial since there are so
many subjective criteria and so many interactive (nonlinear) complexities that
lead us to doubt an additive aggregation.
The author follows up on two previous
articles in Syllabus Magazine (November and December 2001) on a pedagogical
learning cube. This January 2002 article takes a giant leap by aggregating
metrics of six media types, five learning styles, and five types of student
interactions (not to be confused with the model's component interactions).
The pedagogy effectiveness index expressed as a summative rule
I have all sorts of complaints about an
additive summation index of components that are hardly independent.
However, I will leave it to the reader to read this article and form his or her
own opinion.
Number Watch: How to Lie With Statistics
Number Watch
This is a link that every professor should look at very,
very seriously and (sigh) skeptically!
Number Watch is a truly fascinating site --- http://www.numberwatch.co.uk/number%20watch.htm
This site is devoted to the monitoring of the
misleading numbers that rain down on us via the media. Whether they are
generated by Single Issue Fanatics (SIFs), politicians, bureaucrats,
quasi-scientists (junk, pseudo- or just bad), such numbers swamp the media,
generating unnecessary alarm and panic. They are seized upon by media, hungry
for eye-catching stories. There is a growing band of people whose livelihoods
depend on creating and maintaining panic. There are also some who are trying to
keep numbers away from your notice and others who hope that you will not make
comparisons. Their stock in trade is the gratuitous lie. The aim here is to nail
just a few of them.
The Scout Report on February 11, 2005 has this to say:
John Brignell, Professor Emeritus from the Department
of Electronics & Computer Science at the University of Southampton, is the
author of this informal website "devoted to the monitoring of the
misleading numbers that rain down on us via the media." Brignell says he
aims to "nail" a few of the "Single Issue Fanatics (SIFs),
politicians, bureaucrats, quasi-scientists (junk, pseudo- or just bad),"
who use misleading numbers to write catchy articles or who try to keep numbers
away from public notice. Since April 2000, he has been posting a "number
of the month" as well as a "number for the year," which offer
his commentary on media usage of misleading numbers and explanations for why
the numbers are misleading. He also posts book reviews and an extensive list
of online resources on statistics and statistics education. The FAQ section
includes answers to some interesting questions, such as "Is there such a
thing as average global temperature?" and some more basic questions such
as "What is the Normal Distribution and what is so normal about it?"
The Bits and Pieces section includes a variety of short articles on statistics
and his definitions for some terms he uses on the website. Visitors are also
invited to join the discussion forum (complete with a few advertisements) and
view comments by others who want to discuss "wrong numbers in science,
politics and the media." A few comments sent to Brignell and his
responses are also posted online. This site is also reviewed in the February
11, 2005 NSDL MET Report.
Jensen Comment:
I'm getting some feedback from respected scientists that the site has
good rules but then breaks its own rules when
applying the rules.
I focused more on the rules themselves and found the site interesting.
One feature I liked was the set of statistics pages, such as the one at http://www.numberwatch.co.uk/averages.htm
Alas! Even our statisticians with good rules lie with statistics. I guess
that alone makes this site interesting from an educational standpoint.
Bob
Myanmar's improbable tsunami statistics and the
casualty numbers game.
Kerry Howley, "Disaster Math," ReasonOnline, January 7, 2005
--- http://www.reason.com/links/links010705.shtml
Drop-Out Problems
READINGS ON ONLINE COURSE DROP-OUTS
"Do Online Course Drop-Out Rates
Matter?" presented articles on this topic (CIT INFOBITS, Issue 46, April
2002, http://www.unc.edu/cit/infobits/bitapr02.html#3
). Additional readings include:
"Confessions of an E-Learner: Why
the Course Paradigm is All Wrong," by Eve Drinis and Amy Corrigan,
ONLINELEARNING MAGAZINE, April 3, 2002. http://www.onlinelearningmag.com/onlinelearning/reports_analysis/feature_display.jsp?vnu_content_id=1457218
OnlineLearning Magazine: Innovative
Strategies for Business [ISSN: 1532-0022] is published eleven times a year by
VNU Business Media, Inc., 50 S. Ninth Street, Minneapolis, MN 55402 USA; tel:
612-333-0471; fax: 612-333-6526; email: editor@onlinelearningmag.com; Web:
http://www.onlinelearningmag.com/
"Five Steps For Ensuring
E-Learning Success," by Pete Weaver, American Society for Training &
Development (ASTD) website. http://66.89.55.104/synergy/emailmgmt/moreinfo/moreinfo.cfm?member_id=138902&sponsor_id=367&content_id=1293&b1=194&b2=192&b3=192
American Society for Training &
Development (ASTD) is a professional association concerned with workplace
learning and performance issues. For more information, contact ASTD, 1640 King
Street, Box 1443, Alexandria, VA 22313-2043 USA; tel: 703-683-8100 or
800-628-2783; fax: 703-683-1523; Web: http://www.astd.org/
Question
Who will stick it out and who will drop out of a distance education course?
Answer
See http://www.usdla.org/html/journal/JAN03_Issue/article06.html
(Includes a Literature Review)
Hypotheses
This study had two
hypotheses:
- Locus of
control, as measured by Rotter's Locus of Control scale, is a
significant predictor of academic persistence.
- Locus of
control scores increase, moving toward internality, over the course of a
semester for students enrolled in web-based instruction.
Accreditation Issues
Testimony by Sylvia Manning, President, Higher Learning Commission, North
Central Association of Colleges and Schools
Senate Committee on Health, Education, Labor and Pensions
March 10, 2011
http://www.help.senate.gov/imo/media/doc/Manning.pdf
"No Surprise: Accrediting Agency Opts To Stunt Innovation," by Michael
Horn, Forbes, August 8, 2013 ---
http://www.forbes.com/sites/michaelhorn/2013/08/08/no-surprise-accrediting-agency-opts-to-stunt-innovation/
"Innovation vs. Gatekeeping," by Doug Lederman, Inside Higher Ed,
November 11, 2014 ---
https://www.insidehighered.com/news/2014/11/11/one-way-permit-federal-funding-new-postsecondary-institutions
The tension between promoting innovation and new
approaches on the one hand and protecting academic quality and federal
financial aid funds on the other is at the core of many major issues in
higher education -- not the least of which is the accreditation system. The
system of peer-reviewed quality assurance is
frequently attacked as a brake on progress and
competition in American higher education, even as others criticize it for
going too soft on institutions in ways that cost taxpayers money.
Sylvia Manning does not pretend to have all the
answers to all of the issues, and she took her share of guff when caught in
the vise between the two competing pressures. But as the former head of the
nation's largest regional accrediting body, Manning believes she has a
possible answer to one of the dilemmas: how to get new
degree-granting institutions off the ground without undermining the
accreditors' traditional "gatekeeping" role.
In
a paper published last week by the American
Enterprise Institute, Manning begins (in ways that some critics might find
predictable) by challenging assertions that accreditation, in and of itself,
is a barrier to innovation.
Yes, Manning writes, accreditors depend heavily on
"inputs" (credentials of the faculty, services provided to students, etc.)
as proxies to judge whether an institution is likely to "continue to offer
an acceptable level of quality in the education it provides."
But ultimately, an accrediting agency can't
accurately assess an institution based only on its plans, she argues.
"Accreditation demands evidence, and evidence must be based in
accomplishment, not plans," she writes. Since the evidence revolves around
how students perform and "what the institution does with students," the
evidence can be developed only after students are enrolled.
So yes, she concedes, "the barrier to innovative
new institutions is accreditation." But that is not, she quickly adds,
"because accreditation cannot deal with innovation, but because it wants and
needs time to assess innovation, if the innovation is actually new." But the
institution needs accreditation -- or at least one of the key benefits to
accreditation -- the ability to enroll students who receive federal
financial aid -- right away.
That creates what Manning calls the "chicken or egg
problem": fledgling degree-granting institutions needing accreditation so
they can enroll students with federal funding, and accreditors not wanting
to approve institutions until they've enrolled students and proven their
performance with them.
What Happens Now
Most of the ways that accreditors and institutions
have worked around this problem in recent years have, in one way or another,
"perverted" the process, Manning said in an interview.
Throughout much of the decade of the 2000s,
entrepreneurs purchased already-accredited institutions and essentially
turned them into a different institution altogether. The Higher Learning
Commission was at the forefront of such an approach before Manning became
its president, and under her the accreditor largely shut off that pathway.
(That didn't stop her from getting
raked over the coals at a 2011 Senate hearing that
focused on the exploits of the poster child for that type of transformation,
Bridgepoint Education's 2005 purchase of a struggling Iowa college that
became Ashford University.)
More recently, those trying to create new
institutions have turned to what Manning calls "accreditation by
association," in which an existing institution teams up with a new entity
(often a for-profit company) to create a joint venture. Manning and the
Higher Learning Commission were in the middle of that trend, too, with the
much-contested 2013 implosion of Tiffin
University's partnership with Altius Education, known as Ivy Bridge College.
(Supporters of Ivy Bridge
criticized her and AEI for the limitations of
Manning's proposal and for failing, they said, to fully acknowledge her role
in its demise.)
Essentially, Manning argues, there have not yet
been good ways for the accreditation system to "handle these kinds of [new]
institutions while remaining true to itself."
That disconnect has many policy makers calling for
major changes in how accreditation works, although those discussions have
largely revealed how little agreement there seems to be on what those
changes might be. Manning is skeptical that shortening the time before an
institution is accredited, as some have suggested, would work: "[I]t is not
possible to both preserve the time test of accreditation and hurry up
accreditation for new institutions. To drop the time test would be to drop
the elements of an accreditation review that add up to some sort of proof,"
she writes.
Her alternative is creating something else
entirely: a provisional approval to award federal financial aid that would
act something like a building permit in facilities construction. This
process would involve close study of the would-be new institution's plans
(with, yes, a focus on "inputs"), and then once a prospective institution is
given permission to recruit students who are eligible for federal aid,
annual reviews (not unlike inspections for construction of a new building)
to keep that approval. The institution would then need to earn regular
accreditation within a specific period of time, say seven years. Students
who chose to attend these institutions in the meantime -- and the federal
government, to the extent it backed them with financial aid -- would still
take on risk, since the students' credits might not transfer.
Some key elements of Manning's vision remain less
than fully sketched. She offers several possibilities, for instance, for who
might grant this provisional approval -- the Education Department,
recognized accreditors, or new nongovernmental agencies.
And she acknowledges the problems that her solution
does not deal with at all, most notably whether and how the federal
government might recognize the growing number of institutions that do not
have any intention of granting degrees. (The Council for Higher Education
Accreditation and the Presidents' Forum
released a paper last month exploring potential
ways to ensure the quality of "non-institutional" providers of higher
education.)
Continued in article
Jensen Comment
The issues of innovation and elitism versus accreditation have been even more
controversial in AACSB International, which accredits business schools
worldwide ---
http://www.aacsb.edu/
First came the reluctance/stubbornness of the AACSB to accredit graduate
programs in some large corporations and elite consulting firms. These were often
intended to be advanced-degree programs of employees, often extremely talented
employees. To date I don't think any of these corporate business education
programs have received the AACSB seal of approval in North America, thereby
forcing firms like PwC and EY to partner with AACSB-accredited universities like
Notre Dame, the University of Virginia, and the University of Georgia where the
universities set up dedicated courses and degree programs for employees of the
firms. Debates still rage over whether this is a quality issue or merely
protectionism by deans of non-profit universities who virtually control the
AACSB. There now are "universities" such as Deloitte University, but these are
not accredited by the AACSB and are mainly for advanced technical and leadership
training.
Second came the reluctance/stubbornness of the AACSB to accredit business
schools in for-profit universities like the massive University of Phoenix.
To date I don't think the AACSB has accredited any business program in a
for-profit university in North America. Here the issue is more of a quality
concern. For example, for-profit universities, even those with academic respect,
tend to have virtually no admission standards.
Third came the reluctance/stubbornness of the AACSB to accredit stand-alone
distance education programs. To date there are many AACSB-accredited distance
education programs in North America, but all are part of traditional onsite
business education programs that had prior AACSB accreditation.
Recently the AACSB was about to be put to a test that is common in regional
accreditation programs. To obtain regional accreditation for-profit universities
commonly purchased marginal, often bankrupt, colleges that still had their
regional accreditations. Thereby the for-profit universities essentially bought
their regional accreditations. This same ploy almost happened recently with the
financially struggling Thunderbird School of Global Management, a nonprofit
university with AACSB Accreditation. A deal was nearly completed for the
international for-profit
Laureate International Universities to purchase Thunderbird in a complicated
leaseback agreement ---
http://en.wikipedia.org/wiki/Thunderbird_School_of_Global_Management
I'm not certain how the AACSB would have handled the Thunderbird leaseback
deal, but a horrific fight between Thunderbird and its alumni put an end to the
deal before it was consummated.
One thing is certain. The issues of innovation and quality are not going away
in the arena of accreditation. In an effort to obtain a foothold in Europe the
AACSB made some concessions to corporate universities that it probably would not
yet make in North America. For example, some AACSB-accredited programs in
Europe would probably not meet AACSB standards in North America: it is common
in Europe to have doctoral programs that do not have the research rigor and
admission standards of North American business school doctoral programs.
Dangers in Relying Upon Regional Academic Accrediting Agencies
Standards for measuring credit hours and program length, and affirmed its
earlier critique that the commission had been too lax in its standards for
determining the amount of credit a student receives for course work.
Top Accounting Undergraduate Programs Ranked by US News (most now have
masters in accounting programs as well)---
http://colleges.usnews.rankingsandreviews.com/best-colleges/rankings/business-accounting
AACSB-accredited programs that also have specialized accounting
accreditations as well ---
http://www.aacsb.edu/en/accreditation/accounting/
Top MBA Accounting Specialty Programs Ranked by US News
http://grad-schools.usnews.rankingsandreviews.com/best-graduate-schools/top-business-schools/accounting-rankings
If we were to just rank the
accounting doctoral programs in terms of research performance the
rankings might be quite different from the rankings shown above for MBA
specialty and Master of Accounting Programs ---
http://www.byuaccounting.net/rankings/univrank/rankings.php
US News Best Undergraduate Business Programs ---
http://colleges.usnews.rankingsandreviews.com/best-colleges/rankings/business-accounting
Many of these top programs are much more affordable than those chosen by The
Accounting Degree Review.
Guide to Online Community Colleges ---
http://www.affordablecollegesonline.org/online-colleges/community-colleges/
Jensen Comment
Online community college courses are good for things like training certificates
and associate degrees. However, for students wanting four-year and graduate
online courses, there are usually better alternatives such as the ones listed
below.
From US News in 2014
Best Online Degree Programs (ranked) ---
http://www.usnews.com/education/online-education
Best Online Undergraduate Bachelors Degrees ---
http://www.usnews.com/education/online-education/bachelors/rankings
Central Michigan is the big winner
Best Online Graduate Business MBA Programs
---
http://www.usnews.com/education/online-education/mba/rankings
Indiana University is the big winner
Best Online Graduate Education Programs ---
http://www.usnews.com/education/online-education/education/rankings
Northern Illinois is the big winner
Best Online Graduate Engineering Programs
---
http://www.usnews.com/education/online-education/engineering/rankings
Columbia University is the big winner
Best Online Graduate Information Technology
Programs ---
http://www.usnews.com/education/online-education/computer-information-technology/rankings
The University of Southern California is the big winner
Best Online Graduate Nursing Programs ---
http://www.usnews.com/education/online-education/nursing/rankings
St. Xavier University is the big winner
US News Degree Finder ---
http://www.usnews.com/education/online-education/features/multistep-oe?s_cid=54089
This beats the self-serving, biased degree finders run by for-profit universities.
US News has tried for years to rank for-profit universities, but those
institutions don't seem to want to provide the data.
Question
What accredited law schools offer online tax LL.M. degrees?
Answer (these degrees typically take three years to complete for full-time
students unless students already have law degrees)
http://taxprof.typepad.com/taxprof_blog/2014/09/nine-law-schools.html
Selected Online Masters of Accounting and Masters of Taxation Programs ---
http://faculty.trinity.edu/rjensen/CrossBorder.htm#MastersOfAccounting
Time between enrollment and graduation depends a great deal on meeting
prerequisite requirements in accountancy and the business core (including
economics and ethics). I'm biased toward recommending such degrees only from AACSB-accredited
business programs, although not necessarily AACSB-accredited accounting
programs. Some of the most prestigious AACSB-accredited universities do not have
the added accountancy specialized accreditation.
"Why One Accreditor Deserves Some Credit. Really," by Kevin Carey,
Chronicle of Higher Education, July 30, 2012 ---
http://chronicle.com/article/Why-One-Accreditor-Deserves/133179/?cid=cr&utm_source=cr&utm_medium=en
It's hard to be in the accreditation business these
days. The original regional accreditors were founded a long time ago, in a
different world. The first associations, set up on the East Coast in the
late 1800s, were basically clubs with membership criteria that limited
entrance to institutions fitting the classic collegiate mold.
That voluntary, peer-based approach made sense in
an era when higher education was a smaller and more private affair. But when
America embarked on its great mid-20th-century expansion to mass (and
increasingly, federally financed) higher education, small nonprofit
accreditors with no formal governmental authority were given the keys to the
federal financial-aid kingdom and asked to protect the interests of students
and taxpayers alike. It is a job they weren't built for, and they are
increasingly feeling the strain.
When for-profit higher-education corporations
hoover up hundreds of millions of dollars in financial aid while granting
degrees of questionable value, their accreditors get blamed. When studies
like Richard Arum and Josipa Roksa's Academically Adrift call the extent of
college-student learning into question, accreditors are denounced for not
enforcing academic standards. When some public institutions post graduation
rates in the midteens, year after year, accreditors are charged with
abetting failure.
Too often, accreditors react to criticism with a
defensive crouch. So it's been gratifying to watch one regional accreditor,
the Western Association of Schools and Colleges, or WASC, take a different
approach in recent weeks, setting an example for others to follow.
WASC oversees higher education in California,
Hawaii, and the Pacific islands. In early July it rejected an application
from the high-flying publicly traded company Bridgepoint Education. Although
Bridgepoint's corporate headquarters are in a downtown San Diego office
tower, the anchor of its fast-growing online operation, Ashford University,
is in Clinton, Iowa, at the former home of Franciscan University of the
Prairies.
In 2005 Bridgepoint bought Franciscan, which at the
time was declining but still accredited. Franciscan was promptly renamed
Ashford.
Seven years, more than 200,000 students, vast sums
of taxpayer-supported financial aid, and several Congressional hearings
later, Bridgepoint had apparently worn out its welcome with Franciscan's
former accreditor, and decided to look for approval closer to its corporate
home. But WASC turned it down, for reasons that included a paucity of
faculty at Ashford and the fact that 128,000 out of 240,000 students had
dropped out over the last five years. "That level of attrition," said WASC's
president, Ralph A. Wolff, "is, on its face, not acceptable."
WASC did something else that day which received
much less publicity but was, in the long run, probably more important: It
posted its rejection letter to Bridgepoint on the Internet for the world to
see.
Accreditors have historically been a secretive lot,
keeping all the bad news within the insular higher-education family. That's
a defensible approach for a private-membership club. But when organizations
serve as de facto agents of public accountability, their methods and
decisions must be publicly transparent. The other five regional accreditors
should immediately follow WASC's lead.
WASC isn't reflexively opposed to for-profit
colleges. Even as it turned down Bridgepoint, the accreditor approved
for-profit UniversityNow's purchase of struggling nonprofit Patten
University, in Oakland, Calif. Unlike Bridgepoint, UniversityNow has a
low-cost tuition model and doesn't accept federal financial aid.
Additionally, the Accrediting Commission for
Community and Junior Colleges, which is operated by WASC, recently warned
the City College of San Francisco that it may lose its accreditation because
of chronic mismanagement—a step that accreditors are usually loath to take
with public institutions.
. . .
Peer review is also vulnerable to logrolling and
the mutual acceptance of failure. Many public and nonprofit institutions
have attrition rates worse than those at Bridgepoint. Those figures, too,
are unacceptable.
But WASC has taken bold steps to make accreditation
relevant and effective in a rapidly changing higher-education world. For
this, it deserves applause and support. Accreditation may have begun on the
East Coast, but it is the westernmost accreditor that has set a new standard
that all others should follow.
"New Business-School (AACSB) Accreditation Is Likely to Be More
Flexible, Less Prescriptive," by Katherine Mangan, Chronicle of Higher
Education, February 2012 ---
http://chronicle.com/article/New-Business-School/130718/
New accreditation standards for business schools
should be flexible enough to encourage their widely divergent missions
without diluting the value of the brand that hundreds of business schools
worldwide count among their biggest selling points.
That message was delivered to about 500 business
deans from 38 countries at a meeting here this week.
The deans represented the largest and most
geographically diverse gathering of business-school leaders to attend the
annual deans' meeting of AACSB International: the Association to Advance
Collegiate Schools of Business.
The association is reviewing its accreditation
standards, in part to deal with the exponential growth in the number of
business schools overseas, many of which are seeking AACSB accreditation.
The committee that is drawing up proposed new
standards gave the deans a glimpse at the changes under consideration, which
are likely to acknowledge the importance of issues like sustainable
development, ethics, and globalization in today's business schools. A
council made up of representatives of the accredited schools will have to
approve the changes for them to take effect, and that vote is tentatively
scheduled for April 2013.
Joseph A. DiAngelo, the association's chair-elect
and a member of the committee reviewing the standards, said that when the
rules are too prescriptive, schools' mission statements, which drive their
curricula and hiring patterns, all start to look the same.
"It's all vanilla. I want to see the nuts and the
cherries and all the things that make your school unique," said Mr. DiAngelo,
who is also dean of the Erivan K. Haub School of Business at Saint Joseph's
University, in Philadelphia.
The last time the standards were revised, in 2003,
schools were put on notice that they would have to measure how much students
were learning—a task some tackled with gusto. One business school Mr.
DiAngelo met with on a recent accreditation visit "had 179 goals and
objectives, and they only have 450 students," he said. "I said, You can't be
serious."
The committee's challenges include providing a more
flexible accreditation framework to allow schools to customize their
approaches without angering members that have already sweated out the more
rigorous and prescriptive process.
And even though many schools outside the United
States have trouble meeting the criteria for accreditation, especially when
it comes to having enough professors with Ph.D.'s, "We don't think it's
appropriate to have dual standards for schools in the U.S. and those outside
the U.S.," said Richard E. Sorensen, co-chair of the accreditation-review
committee and dean of the Pamplin College of Business at Virginia Tech.
Continued in article
Jensen Comment
In the 1970s, when I guided the University of Maine at Orono to AACSB
accreditation, the standards were relatively fixed for all business schools that
got accredited. By the 1990s, when I participated in (but did not lead) the AACSB
accreditation effort at Trinity University, the accreditation standards had
changed significantly. The relevant accreditation standards became menu-driven.
Getting accreditation entailed choosing missions from the menu. In other words,
attaining accreditation became mission-driven. Whereas an R1 university's main
mission might be having a leading research reputation and a doctoral program, a
non-R1 university might have more focus on other missions such as teaching
reputation or innovative programs for minority student admissions.
There were and still are limits set on mission-driven AACSB accreditation
standards. For example, to my knowledge no program that has more online students
than onsite students has ever attained AACSB accreditation.
However, universities having prestigious online business and accounting programs
like the University of Connecticut can have online degree programs provided
their main missions are to serve onsite students. No North American for-profit
business program to my knowledge has ever been accredited, including some
prestigious MBA programs initiated by leading consulting firms. Outside North
America, however, the AACSB does seem to have a bit more flexibility in terms of
a for-profit mission.
In North America, the AACSB seems to fear opening Pandora's box to for-profit
universities. At the same time, I do not know of any for-profit university that
currently has admission standards and academic standards that I personally would
consider a great candidate for AACSB accreditation. This, of course, does not
mean that some questionable non-profit universities that somehow achieved AACSB
accreditation have stellar admission and academic standards. Maybe I'm a
snob, but I think the AACSB took this mission-driven thing a bridge too far.
The renewed effort to provide even more flexible standards may cheapen the
currency even more.
Sigh! Maybe I really am an old snob!
Bob Jensen's threads about accreditation are at
http://faculty.trinity.edu/rjensen/Assess.htm#AccreditationIssues
"Fantasy Academe: a Role for Sabermetrics," by Robert Zaretsky, Chronicle of Higher Education,
December 17, 2012 ---
http://chronicle.com/article/Fantasy-Academe-a-Role-for/136325/?cid=at&utm_source=at&utm_medium=en
Jensen Comment
The above article was triggered by an unfavorable accreditation review at the
University of Houston. Interestingly, before the 1990s the AACSB accreditation
standards were filled with bright lines that were essentially "sabermetrics,"
such as student/faculty ratio thresholds and the minimum proportion of
terminally qualified faculty in each department, with "terminally qualified"
defined to exclude faculty whose doctoral degrees came from outside the field
of business, such as doctorates in education, economics, mathematics,
statistics, history, etc.
Then, for complicated reasons and excuses, the AACSB replaced bright-line
sabermetrics with squishy standards rooted in mission-driven criteria. AACSB
mission-driven accreditation standards are analogous to
principles-based accounting standards. Now business administration departments
may define "terminally qualified" in terms of the unique missions of the college
of business.
I might add that top university officials hate bright-line, rules-based
accreditation standards. In the old days some astute college presidents (I know
one personally) absolutely refused to allow a college of business to seek
AACSB accreditation. This is because when the number of business-major credit
hours soared relative to the humanities and sciences, business deans could
blackmail the college president for increased budgets on the grounds that
falling behind the AACSB's bright lines would result in losing accreditation. Losing
accreditation is much more serious than not having had such accreditation in the
first place. It's a bit like getting a divorce versus not ever having been
married in the first place. Divorces can be expensive. As Jerry Reed sang, "she
got the gold mine and I got the shaft."
http://www.youtube.com/watch?v=U-p0zn3PijY
Mission-based AACSB standards are a bit like bypassing rules-based
marriage laws with squishy standards under which the business school at College A can have
a much different faculty-student profile than the business school at College B. My college
president friend mentioned above readily funded our quest for AACSB
accreditation when the AACSB restated its standard setting to be mission-based.
This meant that this president couldn't be blackmailed out of using his own
discretion in setting budgets for all departments on campus.
What should be the role of sabermetrics in accreditation?
Bob Jensen's threads on higher education controversies ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
Unreliability of Higher Education's Accrediting Agencies
"Mend It, Don't End It," by Doug Lederman, Inside Higher Ed, February 4,
2011 ---
http://www.insidehighered.com/news/2011/02/04/education_department_panel_hears_ideas_about_improving_higher_education_accreditation
About two-thirds of the way through the first day
of the Education Department's
two-day forum on
higher education accreditation, something strange happened: a new idea
emerged.
Not that the conversation that preceded it was
lacking in quality and thoughtfulness. The discussion about higher
education's system of quality assurance included some of the sharper minds
and best analysts around, and it unfolded at a level that was quite a bit
higher than you'd find at, say, the typical Congressional hearing.
The discussion was designed to help the members of
the Education Department's National Advisory Committee on Institutional
Quality and Integrity understand the accreditation system, so it included a
wide range of voices talking about many aspects of quality, regulation and
oversight in higher education. The exchanges served largely to revisit
history and frame the issues in a way that probably seemed familiar, at
least to those who follow accreditation closely.
The basic gist on which there was general
agreement:
- Higher education accreditation is imperfect
(seriously so, in the eyes of some), with many commentators citing how
rarely the agencies punish colleges and how inscrutable and mysterious
their process is to the public.
- Politicians and regulators are asking
accrediting agencies to do things they were never intended to do, like
make sure colleges don't defraud students.
- Despite those flaws, most seemed less than
eager to try to create a wholly different system to assure the quality
of America's colleges and universities, because they see it as either
difficult or undesirable.
Yet given Education Secretary Arne Duncan's
formal charge to the newly reconstituted panel,
which was distributed at its
first formal meeting in December, most of the
higher education and accreditation officials who attended the policy forum
said they had little doubt that the panel is strongly inclined to recommend
significant changes, rather than just ruminating about how well the system
is working.
Continued in article
Jensen Comment
One of the biggest abuses is the way for-profit universities buy out failing
non-profit colleges for the main purpose of gaining accreditation by buying it
rather than earning it. The scandal is that the accrediting agencies,
especially the North Central accrediting agency, let for-profits simply buy
this respectability. For-profit universities can be anywhere and still buy a
North Central Association accreditation.
I do not know of any successful attempt by a for-profit university to buy out
a failing university that has AACSB accreditation.
Most, but certainly not all, colleges in danger of losing regional
accreditation are for-profit colleges
Here's an illustration of some not-for-profit colleges that are also in trouble
You've really got to be in trouble before regional accreditors sound alarms,
especially in terms of admission and grading standards
"Middle States Ends Accreditation for 1 College, Issues Probation to 4 and
Warnings to 9," Inside Higher Ed, July 1, 2011 ---
http://www.insidehighered.com/news/2011/07/01/qt#263879
"Inspector General Keeps the Pressure on a Regional Accreditor," by Eric
Kelderman, Chronicle of Higher Education, May 27, 2010 ---
http://chronicle.com/article/Inspector-General-Keeps-the/65691/?sid=at&utm_source=at&utm_medium=en
The inspector general of the U.S. Department of
Education has reaffirmed a recommendation that the department should
consider sanctions for the Higher Learning Commission of the North Central
Association of Colleges and Schools, one of the nation's major regional
accrediting organizations. In a
report this week, the Office of Inspector General
issued its final recommendations stemming from a
2009 examination of the commission's standards for
measuring credit hours and program length, and affirmed its earlier critique
that the commission had been too lax in its standards for determining the
amount of credit a student receives for course work.
The Higher Learning Commission accredits more than
1,000 institutions in 19 states. The Office of Inspector General completed
similar reports for two other regional accreditors late last year but did
not suggest any sanctions for those organizations.
Possible sanctions against an accreditor include
limiting, suspending, or terminating its recognition by the secretary of
education as a reliable authority for determining the quality of education
at the institutions it accredits. Colleges need accreditation from a
federally recognized agency in order to be eligible to participate in the
federal student-aid programs.
In its examination of the Higher Learning
Commission, the office looked at the commission's reaccreditation of six
member institutions: Baker College, DePaul University, Kaplan University,
Ohio State University, the University of Minnesota-Twin Cities, and the
University of Phoenix. The office chose those institutions—two public, two
private, and two proprietary institutions—as those that received the highest
amounts of federal funds under Title IV, the section of the Higher Education
Act that governs the federal student-aid programs.
It also reviewed the accreditation status of
American InterContinental University and the Art Institute of Colorado, two
institutions that had sought initial accreditation from the commission
during the period the office studied.
The review found that the Higher Learning
Commission "does not have an established definition of a credit hour or
minimum requirements for program length and the assignment of credit hours,"
the report says. "The lack of a credit-hour definition and minimum
requirements could result in inflated credit hours, the improper designation
of full-time student status, and the over-awarding of Title IV funds," the
office concluded in its letter to the commission's president, Sylvia
Manning.
More important, the office reported that the
commission had allowed American InterContinental University to become
accredited in 2009 despite having an "egregious" credit policy.
In a letter responding to the commission, Ms.
Manning wrote that the inspector general had ignored the limitations the
accreditor had placed on American InterContinental to ensure that the
institution improved its standards, an effort that had achieved the intended
results, she said. "These restrictions were intended to force change at the
institution and force it quickly."
Continued in article
Jensen Comment
The most successful for-profit universities advertise heavily the credibility
of being "regionally accredited." In some cases this accreditation was
initially bought rather than earned, such as by buying up a small, albeit still
accredited, bankrupt not-for-profit private college that's washed up on the
beach. This raises the question of how such for-profit universities maintain
the spirit of accreditation acquired in this manner.
Bob Jensen's threads on higher education controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
Finally, at long last! Why did it take so long?
"Standing Up to 'Accreditation Shopping'," by Scott Jaschik, Inside
Higher Ed, July 1, 2010 ---
http://www.insidehighered.com/news/2010/07/01/hlc
Critics of for-profit higher education have of late
drawn attention to what they see as a pattern of "accreditation shopping" in
which for-profit entities purchase financially struggling nonprofit
colleges, and then hold on to the regional accreditation that the nonprofit
colleges had for years, even as the new owners expand or radically change
the institutions' missions.
One accreditor is saying "not so fast." The Higher
Learning Commission of the North Central Association of Colleges and Schools
has recently rejected two "change of control" requests to have accreditation
continue with the purchases of nonprofit colleges (Dana College, in
Nebraska, and Rochester College, in Michigan) by for-profit entities.
Further, the accreditor insisted on a series of stipulations to approve the
continued accreditation of Iowa's Waldorf College -- stipulations that will
effectively keep the near-term focus of the college on its residential,
liberal arts mission.
The rejection of the
accreditation continuation for Dana led the college's board
to announce
Wednesday that its purchasers no longer consider the
deal viable. As a result, the sale will not take place and the college,
founded in 1884, will shut down. There will be no operations for the 2010-11
academic year.
The decisions by the
Higher Learning Commission (HLC) have been based on a new set of policies
the accreditor approved that require that the mission remain similar after a
purchase if the new owner wants the accreditation to carry over. A new owner
who wants to change an institution's mission still has the right to apply as
a candidate for initial accreditation, but that process takes longer and is
one that many purchasers of colleges want to avoid.
Sylvia Manning, president
of the HLC, said that the new policy was designed to prevent the use of a
struggling college's accreditation to launch entirely new institutions.
"This practice that has been called 'accreditation shopping' -- that's
something we are very much opposed to. Accreditation is not like a liquor
license."
The HLC does not release
details on its decisions, although it announces them in general terms and
plans to announce its decision on Dana today. A letter delivered to the
college Wednesday was leaked to
The Lincoln Journal Star. Manning declined to
confirm the details in the letter that were quoted by the newspaper, but
other sources verified its authenticity.
Dana, a Lutheran liberal
arts institution, announced in March that it was
being purchased by a new for-profit company. The
new owners at the time said that they were going to be focused on building
up the college in its present form -- and that they were committed to
keeping the college's tenure system, an unusual move in for-profit higher
ed.
The HLC letter, as
described in the Lincoln newspaper, suggested that the investors had in mind
a much more dramatic shift in Dana's mission than they indicated at the time
the purchase was announced. According to the Lincoln newspaper, the HLC
rejected the idea of maintaining accreditation because of "an inability to
demonstrate sufficient continuity of the college's mission and educational
programs," in part due to an interest in offering online programs that would
represent a shift from the college's "residential liberal arts programs."
Continued in article
The American Public University System is accredited by the North Central
Association accrediting agency that is now under investigation for weakened
standards for college credits.
"Inspector General Keeps the Pressure on a Regional Accreditor," by Eric
Kelderman, Chronicle of Higher Education, May 27, 2010 ---
http://chronicle.com/article/Inspector-General-Keeps-the/65691/?sid=at&utm_source=at&utm_medium=en
Continued in article
Bob Jensen's threads on for-profit colleges and universities ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#ForProfitFraud
Before reading the tidbit below, I remind you that
specialized business accreditation of colleges by the
AACSB,
IACAB, or some other
accrediting body costs a lot of money initially and every year thereafter for
maintaining accreditation.
If colleges do not yet have specialized accreditation in a
given discipline, they should think twice before seeking it. It's a little
like getting a boyfriend, girlfriend, or spouse.
Getting one is relatively easy, but getting rid of one can be costly and highly
traumatic. It may not be quite as costly to voluntarily drop accreditation, but
all hell breaks loose if the accrediting body puts a college on probation or
suspension of accreditation. The publicity of lost
accreditation can be far more devastating than the loss of accreditation itself.
Specialized accreditation by prestigious schools has always
been somewhat of a waste of money, except for public relations purposes among
other business schools. For purposes of student recruiting and faculty hiring, who
cares about AACSB accreditation at Harvard, Stanford, Chicago, Cornell, USC, the
University of Texas, or the University of Illinois? In really tough financial
times, these universities could easily save money by dropping accreditation, but
their budgets are probably not so miserable as to consider dropping
accreditation.
Specialized accreditation in a given discipline typically
matters more to lesser-known, especially regional, colleges that have a more
difficult time recruiting highly talented students and faculty. Sadly, these are
often the schools that can least afford the cost of maintaining accreditation.
Saving money by dropping accreditation becomes a much tougher decision if
accreditation is deemed to matter in recruitment of students and faculty.
"Struggling Colleges Question the Cost—and Worth—of
Specialized Accreditation," by Eric Kelderman, Chronicle of Higher
Education, October 5, 2009 ---
Click Here
In thinking about selecting a new dean for
its business school this year, Southern New Hampshire University considered
whether the new leader should guide the school to gain accreditation through
the Association to Advance Collegiate Schools of Business, as more than 500
colleges have done.
But after seeing estimates that the costs
of meeting those standards could top $2-million annually, Paul J. LeBlanc,
president of the university, decided that approval from the business-college
association wasn't worth the institution's time or money.
While accreditation from a federally
recognized organization is required for an institution's students to receive
federal financial aid, colleges have often sought additional specialized
accreditation to meet professional-licensing standards or to bolster their
reputations.
But in the uncertain economic climate,
some institutions are struggling with whether they can maintain the levels
of financial support or the numbers of highly qualified faculty members
needed for the associations' stamps of approval. And some campus leaders are
deciding that the costs of such endorsements outweigh the benefits.
An Expensive Business
The price of becoming accredited includes annual dues and the expenses of peer reviewers
who visit the campus every few years. Annual membership fees for
business-school accreditation range from $2,500 to $7,300, and one-time fees
for initial accreditation are as much as $18,500.
But a much greater cost usually comes with
having to meet an association's standards for hiring a sufficient number of
qualified faculty members. This has added to the intense competition for
professors in fields such as pharmacology, nursing, and business, where
instructors are scarce because jobs outside academe do not usually require a
terminal degree, and teaching at a university might mean a big pay cut.
Rather than compete with the nation's best
business colleges for a limited number of people with doctoral degrees, Mr.
LeBlanc said his institution would be better off creating business-degree
programs with practical applications, in areas like supply-chain management.
Seeking accreditation would also have tied up the new dean with duties other
than running the school, he said.
Jerry E. Trapnell, executive vice
president and chief accreditation officer at the Association to Advance
Collegiate Schools of Business, says that so far, the economic downturn has
not led to an unusually high number of colleges dropping out of the
accreditation process. But the long-term effect of the downturn is hard to
predict, he said.
Other business-school leaders say the
costs of accreditation from the business-college association are a problem
not just because of the economy. The cost, some experts argue, has "stunted
the growth" of continuing-education programs that typically attract
nontraditional students who may not have the time or money to pursue a
college degree full time.
Business and management courses are
indispensable for continuing-education programs, said Jay A. Halfond, dean
of Metropolitan College at Boston University, and Thomas E. Moore, dean of
the College of Business Administration at Northeastern University, in an
article they wrote this year in the journal Continuing Higher Education
Review. But to meet the accreditation standards, undergraduate programs that
have more than a quarter of their courses in business and graduate programs
with at least half of their courses in that field must be taught primarily
by "full-time, conventional faculty, with advanced research credentials and
an active record of ongoing scholarship," the authors wrote.
To keep continuing-education programs
affordable for part-time students, some colleges have sidestepped the
standards by using "euphemistic" names for their programs, the article said,
or by making sure that the proportion of business courses is just under the
accreditor's threshold.
Mr. Halfond doesn't think business-school
accreditors are "the evil empire," he said in an interview. "But it can be
very painful for some institutions to reach their standards, and they're not
very forgiving."
A Mark of Credibility
Officials at Georgia
Southwestern State University, however, say the business school's
accreditation has improved the reputation of its program. John G. Kooti,
dean of the School of Business Administration there, said the goal of
accreditation inspired greater support from the university and attracted
better-qualified faculty members and more students. "We used accreditation
to build a program," he said. "It brought us credibility."
Georgia Southwestern, which earned
accreditation from the business-college association this spring, doubled the
amount of the business school's budget over the past five years to meet the
accreditor's standards, Mr. Kooti said. The school has also increased the
size of its faculty to 19 from 11. And Mr. Kooti anticipates hiring two more
faculty members next year to keep up with enrollment, which has grown 20
percent over the past two years.
Georgia Southwestern has also spent nearly
$500,000 to renovate the space that the business school uses, Mr. Kooti
said. Feng Xu, an assistant professor of management, said potential faculty
members look more favorably on job offers from accredited business colleges.
Even institutions without that accreditation look for instructors who have
degrees from accredited colleges, he said.
International students are also concerned
about accreditation because they may have little other information about the
quality of an institution before coming to the United States, said Mr. Xu, a
native of China who earned graduate degrees at South Dakota State University
and George Washington University, both of which are accredited by the
business-school association.
Eduardo J. Marti, president of
Queensborough Community College of the City University of New York, said
that the real value of accreditation accrued to students. "The only thing
our students leave the college with is a certificate of stock, a diploma,
which is worth only the reputation of the college," he said.
"I think a lot of presidents cry about the
cost of accreditation and the things they have to do to meet the standards,
when what they are really saying is they are concerned about someone coming
from outside and trying to run their programs," he said.
However, Mr. Halfond, of Boston
University's Metropolitan College, said that whether or not an institution
has earned a specialized accreditation is probably not a major concern of
most students and applicants. Because of that, he said, some colleges may
calculate that the cost of seeking and maintaining accreditation is far
greater than that of losing a few potential students.
In fact, Steven F. Soba, director of
undergraduate admissions at Southern New Hampshire, said that during his 17
years as an admissions officer he could think of only a couple of instances
where parents had inquired about any kind of accreditation.
Accreditors' Concerns
As state budget cuts
and other drops in revenue take their toll on colleges, some accrediting
groups are trying to ease the financial burdens on institutions or at least
give them a chance to wait out the recession without being penalized.
Sharon J. Tanner, executive director of
the National League for Nursing Accrediting Commission, said that losing
existing or potential members is a concern for many accrediting bodies,
though they are unlikely to admit publicly that it is happening for fear of
damaging their reputations.
The nursing-accreditation group is still
benefiting from the booming demand for health-care workers, Ms. Tanner said.
Forty-one institutions entered the initial phase of nursing accreditation
during the past year. At the same time, however, a small number of colleges
have asked to delay campus visits by peer reviewers, she said, and several
other institutions have sought advice on how to remain accredited while
making cuts in their programs.
James G. Cibulka, president of the
National Council for Accreditation of Teacher Education, said many of his
member institutions accepted the association's offer to delay their
accreditation cycle by one year.
The council has also redesigned its
accrediting standards to focus more on how well education students perform
as teachers rather than on the specifics of the college's academic program.
In addition to improving teacher education, the new standards are expected
to be less costly for colleges, Mr. Cibulka said.
Cynthia A. Davenport, director of the
Association of Specialized & Professional Accreditors, said concerns about
the economy and its effect on the quality of academic programs were widely
discussed at a recent meeting of her association, which represents about 60
organizations that assess programs such as acupuncture, landscape
architecture, and veterinary medicine.
The poor economy, however, is no excuse to
let accreditation standards slip, she said. At a time when students are
flocking back to college to improve their job skills, the public needs to be
assured that colleges are providing quality education, she said.
If a college can't afford to hire the
same number of faculty members for an accredited program as it has in the
past, for instance, then it could reduce the enrollment in that area, she
said.
"Members know that some institutions may
be faced with difficult choices," she said, "but if they can't meet the
standards, then maybe they shouldn't be offering that program."
October 9, 2009 reply from Barbara Scofield
[barbarawscofield@GMAIL.COM]
Yet accreditation can't be ignored in accounting
education
NASBA's UAA Model at
http://www.nasba.org/862571B900737CED/F3458557E80CD8CA862575C3005DBD36/$file/UAA_Model_Rules_April24_2009.pdf
uses accreditation to differentiate the level of
reliance state boards place on business education at universities. Some
states (Texas) pride themselves on their adherence to NASBA, seeing it as a
"best practices" measure.
I'm interested in knowing if any of the states
represented by members of this list already have accreditation issues in
their state board of accountancy rules.
TSBPA adopted requirements for business
communications and accounting research this January for a future effective
date solely (in my opinion) to be able to say that they are following the
NASBA model. In the rules adopted in Texas, there can be no joint credit
towards CPA candidacy for a credit hour that provides both accounting
research and communication skills. So I have little faith in their actually
understanding the research process, despite the presence of academics on the
board.
I had a CPA, former chair of the Texas State Board
of Public Accountancy, board member (perhaps chair at that time) of NASBA
speak in my class, and he spoke plainly about the intent by both bodies (TSBPA
and NASBA) to dictate changes in accounting education without having a clue
that I might disagree with him.
Barbara W. Scofield, PhD, CPA
Chair of Graduate Business Studies and Professor of Accounting
The University of Texas of the Permian Basin
4901 E. University Dr. Odessa, TX 79762
432-552-2183 (Office)
BarbaraWScofield@gmail.com
The NASBA homepage is at
http://www.nasba.org/nasbaweb/NASBAWeb.nsf/WPHP?OpenForm
"How Colleges Are Buying Respect: For-profit education companies
are scooping up small schools to gain accreditation—and the financial aid
dollars that come with it," by Daniel Golden, Business Week, March 4,
2010 ---
http://www.businessweek.com/magazine/content/10_11/b4170050344129.htm?link_position=link4
ITT Educational Services (ESI)
didn't pay $20.8 million for debt-ridden Daniel
Webster College in June just to acquire its red-brick campus, 1,200
students, or computer science and aviation training programs.
To ITT, the third-biggest higher-education company
in the U.S., the Nashua (N.H.) college's "most attractive" feature was its
regional accreditation, says Michael Goldstein, an attorney at Dow Lohnes, a
Washington firm that has long represented the Carmel (Ind.) company.
Regional accreditation, the same gold standard of academic quality enjoyed
by Harvard, is a way to increase enrollment and tap into the more than $100
billion the federal government pays out annually in financial aid.
The nation's for-profit higher-education companies
have tripled enrollment, to 1.4 million students, and revenue, to $26
billion, in the past decade, in part through the recruitment of low-income
students and active-duty military. Now they're taking a new tack. By
exploiting loopholes in government regulation and an accreditation system
that wasn't designed to evaluate for-profit takeovers, they're acquiring
struggling nonprofit and religious colleges—and their coveted accreditation.
Often their goal is to transform the schools into taxpayer-funded behemoths
by dramatically expanding enrollment with online-only programs; most of
those new students will receive federally backed financial aid, which is
only available at accredited colleges.
"The companies are buying accreditation," said
Kevin Kinser, an associate professor at the State University of New York at
Albany who studies for-profit higher education. "You can get accreditation a
lot of ways, but all of the others take time. They don't have time. They
want to boost enrollment 100% in two years."
By acquiring regional accreditation, trade schools
and online colleges gain a credential associated with traditional academia.
Six nonprofit regional associations set standards on financial stability,
governance, faculty, and academic programs. Normally the process takes five
years and requires evaluations by outside professors. Most for-profits have
been accredited by less prestigious national organizations. Students
enrolled at both regionally and nationally accredited colleges can receive
federal aid, but those at regionally accredited schools can transfer credits
more easily from one college to the next.
"CREATIVE ARRANGEMENTS"
For-profit education companies, including ITT and
Baltimore-based Laureate Education, have purchased at least 16 nonprofit
colleges with regional accreditation since 2004. The U.S. Education Dept.,
which doled out $129 billion in federal financial aid to students at
accredited postsecondary schools in the year ended Sept. 30, is examining
whether these kinds of acquisitions circumvent a federal law that requires a
two-year wait before new for-profit colleges can qualify for assistance,
says Deputy Education Under Secretary Robert Shireman. Under federal
regulations taking effect on July 1, accrediting bodies may also have to
notify the Education Secretary if enrollment at a college with online
courses increases more than 50% in one year. "It certainly has been a
challenge both for accreditors and the Department of Education to keep up
with the new creative arrangements that have been developing," Shireman
says.
Buying accreditation lets the new owners
immediately benefit from federal student aid, which provides more than 80%
of revenue for some for-profit colleges, instead of having to wait at least
two years. Traditional colleges are also more inclined to offer transfer
credits for courses taken at regionally approved institutions, making it
easier to attract students.
The regional accreditors, which rely on academic
volunteers, bestow the valuable credential with scant scrutiny of the
buyers' backgrounds, says Barmak Nassirian, associate executive director of
the American Association of Collegiate Registrars & Admissions Officers in
Washington.
March 6, 2010 reply from Jagdish Gangolly [gangolly@GMAIL.COM]
Bob,
I agree that losing accreditation can be a
disaster. But then again, how many institutions actually lose it? Is it a black swan
event?
I abhor the thought of looking upon education as a
"business", but if we want accountability, we must recognise that there is a
business aspect to education. And it is here that some marriage of business
and education might help.
In businesses, normal attrition takes care of
efficiency and career advancement problems the same way that wars take care
of similar issues in the military. In the universities, on the other hand,
the tenure system prevents that from happening. That has two consequences:
1. It reduces mobility and promotes stagnation.
So, the only people who can and do move are the well-dressed beggars in
the blog I sent a bit earlier today.
2. The career path comes to a dead end once you
have reached the full (or chaired) slot. The result is that the
organisation comes to resemble an inverted pyramid, obviously a
disequilibrium. Most universities solve this problem by creating fancy
titles and taking people out of the classrooms (how many Deans or vice
Presidents teach or are active in their fields?).
The businesses taking over smaller institutions
might bring better accountability and greater efficiencies. But I am not sure
they would maintain the standard of education or sustain freedom of inquiry
and academic freedom. Such universities might resemble Chinese factories
producing standardised low-quality stuff at an attractive price.
Jagdish S. Gangolly
Department of Informatics College of Computing & Information State
University of New York at Albany Harriman Campus, Building 7A, Suite 220
Albany, NY 12222 Phone: 518-956-8251, Fax: 518-956-8247
March 6, 2010 reply from Bob Jensen
Hi Jagdish,
Anecdotally,
I know of quite a few colleges that were put on regional accreditation
probation. The only way they saved their accreditation was to get
their finances and academic standards back on track. There are of course
some that went under.
One of the
best known cases recently was Florida A&M’s loss of accreditation. This
university has since turned itself around ---
http://www.usnews.com/blogs/paper-trail/2008/06/30/florida-am-regains-accreditation.html
Another famous case of a university that let academic standards slide was
Gallaudet University ---
http://www.washingtonpost.com/wp-dyn/content/article/2007/05/02/AR2007050202453.html
I think Gallaudet turned itself around.
There are
also some Colleges of Business that were put on AACSB probation. In most of
those cases the university had to take from Peter Humanities to pay Paul
Business.
This brings
up one point concerning strategy regarding accrediting a program within a
university. In truth, AACSB accreditation is very costly with only limited
benefits to universities that have solid reputations university-wide. For
example, who cares if the Harvard Business School has AACSB accreditation?
For that matter, who cares if the University of Maine has AACSB
accreditation?
When I was
at the University of Maine (UMO) I was the person assigned the duty of
getting AACSB accreditation for UMO. Doing so was the strategy of a very
smart Dean (for four decades) of the College of Business named Stan Devino
(one of my all-time best friends in my entire life). Somehow Stan convinced
the President of UMO that getting AACSB accreditation was a great idea.
But Stan’s
secret motive was to leverage more resources for UMO. At the time UMO’s College
of Business was underfed in terms of numbers of tenured business faculty,
office space, salaries of business faculty, and scholarships for the MBA
program. We got some resources to gain the initial accreditation. But in
later years when UMO budgets fell under greater stress, the College of
Business was not cut back as much as other campus programs because losing
AACSB accreditation would be devastating for UMO. I suspect the President of
UMO rued the day he helped us attain AACSB accreditation. The College
of Business even jumped to the top of the capital expenditure list for a
great new building.
Hence, the
threat of losing accreditation is a double-edged sword that can play to the
advantage of a cunning Dean. If I were the President of a reputable college I
would probably throw any dean out of my office who proposed a quest to get
program accreditation unless there were exceptional benefits from such
accreditation. If graduates of a program virtually cannot advance unless
their program has accreditation, then this is an exceptional benefit. For
example, I think this is the case for nursing programs. It is not the case
for business programs in universities that have great university-wide
reputations.
Bob Jensen
Online Distance Education is Rapidly Gaining Acceptance in Traditional as
Well as For-Profit Colleges ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#DistanceEducation
The Dark Side ---
http://faculty.trinity.edu/rjensen/000aaa/theworry.htm
Online Training and Education Alternatives ---
http://faculty.trinity.edu/rjensen/crossborder.htm
Accreditation: Why We Must Change
Accreditation has been high on the agenda of the
Secretary of Education’s Commission on the Future of Higher Education —
and not in very flattering ways. In
“issue papers” and
in-person discussions, members of the commission
and others have offered many criticisms of current accreditation practice and
expressed little faith or trust in accreditation as a viable force for quality
for the future.
Judith S. Eaton, "Accreditation: Why We Must Change," Inside Higher Ed,
June 1, 2006 ---
http://www.insidehighered.com/views/2006/06/01/eaton
A Test of Leadership: Charting the Future of U.S. Higher Education
Charles Miller, chairman of the Secretary of
Education’s
Commission on the Future of Higher Education,
delivered
the final version of the panel’s report to the
secretary herself, Margaret Spellings, on Tuesday. The report, “A Test of
Leadership: Charting the Future of U.S. Higher Education,” is little changed
from the final draft that the commission’s members
approved by an 18 to 1 vote last month. Apart from
a
controversial change in language that softened the
panel’s support for open source software, the only other alterations were the
addition of charts and several “best practices” case studies, which examine the
California State University system’s
campaign to reach out to underserved students in
their communities, the
National Center for Academic Transformation’s efforts
to improve the efficiency of teaching and learning, and
the innovative curriculum at Neumont University (yes, Neumont University), a
for-profit institution in Salt Lake City. Spellings
said in a statement that she looks forward to
“announcing my plans for the future of higher education” next Tuesday at a
previously announced luncheon at the National Press Club in Washington.
Inside Higher Ed, September 20, 2006 ---
http://www.insidehighered.com/news/2006/09/20/qt
"Assessing Learning Outcomes," by Elia Powers, Inside
Higher Ed, September 21, 2006 ---
http://www.insidehighered.com/news/2006/09/21/outcomes
“There is inadequate transparency
and accountability for measuring institutional performance,
which is more and more necessary to maintaining public trust in
higher education.”
“Too many decisions about higher
education — from those made by policymakers to those made by
students and families — rely heavily on reputation and rankings
derived to a large extent from inputs such as financial
resources rather than outcomes.”
Those are the words of the
Secretary of Education’s Commission on the Future of Higher
Education, which on Tuesday handed
over its
final report to Secretary Margaret
Spellings.
Less than a week before Spellings
announces her plans to carry out the commission’s report, a
panel of higher education experts met in Washington on Wednesday
to discuss how colleges and universities report their learning
outcomes now and the reasons why the public often misses out on
this information. On this subject, the panelists’ comments fell
largely in line with those of the federal commission.
The session, hosted by
the Hechinger Institute on Education and the Media,
at Columbia University’s Teachers College, included an
assessment of U.S. News & World Report’s annual college
rankings, which critics say provide too little information about
where students learn best.
“The game isn’t about rankings and
who’s No. 1,” said W. Robert Connor, president of the Teagle
Foundation, a group that has sponsored a
series of grants in “value added assessment,”
intended to measure what students learn in college. Connor said
colleges should be graded on a pass/fail basis, based on whether
they keep track of learning outcomes and if they tell the public
how they are doing.
“We don’t need a matrix of facets
summed up in a single score,” added David Shulenburger, vice
president of academic affairs for the National Association of
State Universities and Land-Grant Colleges.
What students, parents, college
counselors and legislators need is a variety of measuring
sticks, panelists said. Still, none of the speakers recommended
that colleges refuse to participate in the magazine’s rankings,
or that the rankings go away.
“It’s fine that they are out there,”
said Richard Ekman, president of the Council on Independent
Colleges. “Even if it’s flawed, it’s one measure.”
Ekman said the Collegiate Learning
Assessment, which measures educational gains made from a
student’s freshman to senior year, and the National Survey of
Student Engagement, which gauges student satisfaction on
particular campuses, are all part of the full story. (Many
institutions participate in the student engagement survey, but
relatively few of them make their scores public.) Ekman said
there’s no use in waiting until the “perfect” assessment measure
is identified to start using what’s already available.
Still, Ekman said he is “wary about
making anything mandatory,” and doesn’t support any government
involvement in this area. He added that only a small percentage
of his constituents use the CLA. (Some are hesitant because of
the price, he said.)
Shulenburger plugged a yet-to-be
completed index of a college’s performance, called the
Voluntary System of Accountability,
that will compile information including price, living
arrangements, graduation rates and curriculums.
Ross Miller of the Association of
American Colleges & Universities said he would like to see an
organization compile a list of questions that parents and
students can ask themselves when searching for a college. He
said this would serve consumers better than even the most
comprehensive ranking system.
The Spellings commission recommended
the creation of an information database and a search engine that
would allow students and policymakers to weigh comparative
institutional performance.
Miller also said he would like to see
more academic departments publish on their Web sites examples of
student work so that applicants can gauge the nature and quality
of the work they would be doing.
Bob Jensen's threads on higher education controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
When Grading Less Is More
https://www.insidehighered.com/news/2019/04/02/professors-reflections-their-experiences-ungrading-spark-renewed-interest-student?utm_source=Inside+Higher+Ed&utm_campaign=aed948ff1a-DNU_2019_COPY_01&utm_medium=email&utm_term=0_1fcbc04421-aed948ff1a-197565045&mc_cid=aed948ff1a&mc_eid=1e78f7c952
When it comes to grading, less is more. So say a number of
scholars who have shared their recent experiments with
“ungrading” in blog posts and on other social media, sparking
renewed discussions about the practice.
“My core
hypothesis was that student learning would actually be improved
by eliminating instructor grading from the course,”
Marcus Schultz-Bergin, assistant lecturer of philosophy at
Cleveland State University, wrote
of going gradeless this
semester in a personal blog post that has since been shared on
the popular philosophy site Daily Nous.
“My
hope” for students, Schultz-Bergin continued, “is that the
reflection they engaged in, and the discussions we had, will
lead to a significant commitment in the second half of the
course to really achieve what they set out for themselves so
that when they tell me they earned an A they can really mean
it.”
Thus far, he added, the experiment in his undergraduate
philosophy of law course "has had its ups and downs. There are
definitely some things I will change going forward, but I do
think the gradeless approach can work well in a course like
this.”
Experts in ungrading say it’s still relatively rare in higher
education, due in part to inertia with respect to pedagogical
innovation, the culture of assessment and professors’ anxieties
about going gradeless. How will students respond? What will
colleagues say? What will administrators think?
Continued in article
Jensen Comment
I would've loved my 40 years of teaching more if I never had to
assign grades (other than maybe Pass/Fail).
But I would've felt that in assigning only a P or an F I
was less professional. Grading is important at most any level of
education. Personally, I worked harder to learn from the fifth grade
onward in courses where teachers were harder graders. Part of it was
probably my competitive nature. But mostly I wanted to bring home
report cards to make my parents proud of me when they signed each
report card.
I don't think I would've liked having to write a letter of
performance for each student who never took an exam. Sure, I could
write about innovative ideas students had in essays, but it's very
hard to compare such ideas across students, since innovative ideas
are often impractical with unachievable goals.
My own experience as a teacher in college is that competitive
grades were the main motivating factor for my better students and
often even for my struggling students, who dug in harder to improve their
grades as each semester progressed.
How many students really take a pass/fail course so they won't
have to work as hard in that course?
Grades are a way that students can demonstrate ability when they
tend to do poorly on standardized tests.
You may not be doing minority students any favors when you take away
course grades that show deeper work ethics and abilities.
Some admission policies force high schools to identify the top 10% of each
graduating class, such as the Texas 10% rule for automatic
admission to state-supported Texas universities ---
https://en.wikipedia.org/wiki/Texas_House_Bill_588
How do you select the top 10% of a high school's graduating class if
there are no course grades?
Many graduate schools (including medical schools and law schools)
claim they are looking more heavily into grades to counter poor
standardized test scores like the GRE, GMAT, MCAT, LSAT, etc.
Without grades it would seem to me that they become more reliant on
standardized tests. Letters of recommendation from former professors
are hard to get in this age of lurking lawyers and in this
age where class sizes are so huge that professors really don't get
to know all their students very well. Letters of recommendation
rarely say anything negative, so if there are 3,000 applicants
to fill 400 slots in a medical school, those letters of
recommendation from Lake Wobegon are of little help in the screening
process ---
https://en.wikipedia.org/wiki/Lake_Wobegon
I'm not saying that students should not be allowed to take an
occasional Pass/Fail course, especially if it's outside their major
field of study. What I am saying is that pass/fail should not go
mainstream.
Bob Jensen's threads on assessment are at
http://faculty.trinity.edu/rjensen/assess.htm
Question
What states (the Seven Sorry Sisters) in the U.S. have the most lax laws
regarding diploma mills?
"Watching a Watchdog’s Words," by Doug Lederman, Inside Higher Ed, August 14,
2008 ---
http://www.insidehighered.com/news/2008/08/15/contreras
Alan Contreras is an increasing rarity these days:
a knowledgeable public official who says what he thinks without worrying too
much about whom he offends. That trait has him in a scrape over free speech
with his superiors in Oregon’s state government. And while they backed away
Thursday from the action that had most troubled him, Contreras isn’t backing
down from the fight.
Contreras oversees the state’s
Office of Degree Authorization, which decides
which academic degrees and programs may be offered within Oregon’s
boundaries. Through his position in that office, which is part of the Oregon
Student Assistance Commission, Contreras has become a widely cited expert
for policy makers and journalists, on issues such as diploma mills,
accreditation, and state regulation of higher education. He also writes
widely on those and other topics for general interest newspapers and higher
education publications — including
Inside Higher Ed.
Some of those writings rub people the wrong way. In
a
2005 essay for Inside Higher Ed, for
instance, Contreras characterized a group of states with comparatively lax
laws and standards on governing low-quality degree providers as the “seven
sorry sisters.” Other columns have
questioned the utility of affirmative action and
discouraged federal intervention in higher education.
In his writings about higher education topics,
Contreras scrupulously notes that his comments are his own, not the state’s.
Contreras’s writings and outspoken comments over
the years have earned him his share of enemies, particularly among
proprietors of unaccredited institutions that he strives to shut down. And
while his wide-ranging opinion making has allowed some critics to write him
off as a gadfly, he testifies as an expert before Congress and delivers
keynote addresses at
meetings of higher education accrediting associations.
Those writings have raised some hackles in Oregon.
About a year ago, Contreras says, Bridget Burns, the appointed head of the
Oregon Student Assistance Commission, told Contreras that she wanted him to seek
her approval before he did any outside writing that identified him as a
state employee. Contreras balked, and after numerous discussions among
commission officials in the months that followed, he says, he was told
during his annual review last December that “they realized I had the right
to do my writing,” Contreras says. “I thought it was all done.”
But this week, Contreras says he was contacted by
several acquaintances who had received an annual survey that the commission
does, as part of his annual review, to assess the quality of his and his
office’s work. In addition to the usual two questions of the “how are we
doing?” variety, as Contreras calls them, the survey that began circulating
last week contained two new ones:
- “Alan occasionally writes opinion pieces in
newspapers and professional journals. Do you have any concerns about a
state employee expressing personal opinions in this way?”
- “Do Alan’s writings affect your perception of
OSAC?”
Contreras says that several of those who contacted
him asked him whether he was under fire from his superiors. The official of
one institution that is involved in a case before him, he says, “asked if I
was the victim of a witch hunt by my own agency.” One recipient of the
survey, Michael B. Goldstein, a Washington lawyer who serves on an
accreditation panel with Contreras and has appeared on conference panels
with him, says he was surprised both to have been asked to assess Contreras
and by the tenor of the questions.
“It’s not uncommon for people who work closely with
someone to be asked to comment on his or her performance, but I have never
seen it cast like this to people who are pretty far removed,” Goldstein
says.
Contreras characterizes the commission’s inquiry as
an attempt “to unconstitutionally interfere with my free speech rights under
the Oregon Constitution,” which reads in part: “No law shall be passed
restraining the free expression of opinion, or restricting the right to
speak, write, or print freely on any subject whatever; but every person
shall be responsible for the abuse of this right.” The commission’s inquiry,
he says, “damaged my reputation with the people I work with” in and around
Oregon. “It’s clear that it’s perceived out there as some show of ‘no
confidence’ in me.”
Contreras says that he complained Wednesday to the
staff of Gov. Ted Kulongoski about the commission’s actions, and that he had
asked for Burns’s resignation. Kulongoski’s higher education aide could not
be reached for comment late Thursday.
Public Employees’ Free Speech Rights
The legal situation surrounding the free speech
rights of public employees is in a state of flux. A
2006 Supreme Court decision altered 35 years of
settled jurisprudence by finding that when public employees make statements
that relate to their official duties, “the employees are not speaking as
citizens for First Amendment purposes, and the Constitution does not
insulate their communications from employer discipline,” as Justice Anthony
M. Kennedy wrote in the majority opinion in Garcetti v. Ceballos.
That ruling modified the court’s 1968 decision in
Pickering v. Board of Education, which had
mandated that public employees have a right to speak about matters of public
concern that must be balanced against the government’s ability to operate
effectively and efficiently.
Contreras acknowledges that, both legally (even
under Oregon’s expansive constitutional provision) and otherwise, he might
be on shaky ground if he “went around trashing” the Oregon Student
Assistance Commission’s scholarship and other financial aid programs. “It
would be completely inappropriate for me to go around saying that these
programs are terrible programs and shouldn’t be supported,” he says.
But “99 percent of what I write doesn’t have to do
with anything the agency is doing,” Contreras says. “So what if I said the
University of Oregon’s affirmative action plan is awful, or that the level
of academic planning in most colleges is insufficient. That is legitimate
comment on public policy issues, and it is perfectly normal comment by a
citizen.”
Continued in article
Bob Jensen's threads on diploma mills are at
http://faculty.trinity.edu/rjensen/FraudReporting.htm#DiplomaMill
Bob Jensen's threads on whistle blowing are at
http://faculty.trinity.edu/rjensen/FraudConclusion.htm#WhistleBlowing
Bob Jensen's fraud updates are at
http://faculty.trinity.edu/rjensen/FraudUpdates.htm
Colleges On the Far, Far Left Are Having a Difficult Time With Finances
and Accreditation
"Turmoil at Another Progressive College," by Elizabeth Redden, Inside
Higher Ed, August 1, 2007 ---
http://www.insidehighered.com/news/2007/08/01/newcollege
New College of California, which, according to its president, depends on
tuition for 95 percent of its budget, finds itself at this crossroads as the
closure of Antioch College’s main undergraduate institution focuses attention
on the particular vulnerability of progressive colleges, which tend to feature
small enrollments, individualized instruction and a commitment to producing
alumni engaged in socially responsible, if not fiscally rewarding, careers.
With a historic focus on non-traditional education, New College’s graduate and
undergraduate program offerings today include women’s spirituality, teacher
education, activism and social change, and experimental performance.
The college has repeatedly tangled with its accreditor in the past, with this
month’s action coming a year, its president said, after it was removed from
warning. A July 5 letter from the Western Association to the college’s
president of seven years, Martin J. Hamilton, documents an ongoing financial
crisis about as old as the college itself and a “pervasive failure” in proper
recordkeeping. WASC also notes concerns about academic integrity at the
college, including a “routine” reliance upon independent study that operates
outside of published criteria or oversight. The accrediting body indicates
that it found “substantial evidence of violations” of its first standard, that
an institution “function with integrity.” (The letter is available on the San
Francisco Bay Guardian’s blog.)
Continued in article
Bob Jensen's threads on higher education controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
Reporting Assessment Data is No Big Deal for
For-Profit Learning Institutions
"What Took You So Long?" by Doug Lederman, Inside Higher Ed, June 15,
2007 ---
http://www.insidehighered.com/news/2007/06/15/cca
You’d
have been hard pressed to attend a major higher education
conference over the last year where the work of the
Secretary of Education’s
Commission on the Future of Higher Education
and the U.S. Education Department’s
efforts to carry it out were not discussed. And they were
rarely mentioned in the politest of terms, with faculty
members, private college presidents, and others often
bemoaning proposals aimed at ensuring that colleges better
measure the learning outcomes of their students and that
they do so in more readily comparable ways.
The annual
meeting of the Career College Association, which represents
1,400 mostly for-profit and career-oriented colleges,
featured its own panel session Thursday on Education
Secretary Margaret Spellings’ various “higher education
initiatives,” and it had a very different feel from
comparable discussions at meetings of public and private
nonprofit colleges. The basic theme of the panelists and the
for-profit college leaders in the audience at the New
Orleans meeting was: “What’s the big deal? The government’s
been holding us accountable for years. Deal with it.”
Ronald S.
Blumenthal, vice president for operations and senior vice
president for administration at Kaplan Higher Education, who
moderated the panel, noted that the department’s push for
some greater standardization of how colleges measure the
learning and outcomes of their students is old hat for
institutions that are accredited by “national” rather than
“regional” accreditors, as most for-profit colleges are. For
nearly 15 years, ever since the Higher Education Act was
renewed in 1992, national accreditors have required
institutions to report placement rates and other data, and
institutions that perform poorly compared to their peers
risk losing accreditation.
“These are
patterns that we’ve been used to for more than 10 years,”
said Blumenthal, who participated on the Education
Department negotiating panel that considered possible
changes this spring in federal rules governing
accreditation. “But the more traditional schools have not
done anything like that, and they don’t want to. They say
it’s too much work, and they don’t have the infrastructure.
We had to implement it, and we did implement it. So what
if it’s more work?,” he said, to nods from many in the
audience.
Geri S.
Malandra of the University of Texas System, another member
of the accreditation negotiating team and a close adviser to
Charles Miller, who headed the Spellings Commission and
still counsels department leaders, said that nonprofit
college officials (and the news media, she suggested) often
mischaracterized the objectives of the commission and
department officials as excessive standardization.
“Nobody was
ever saying, there is one graduation rate for everyone
regardless of the program,” Malandra said. “You figure out
for your sector what makes sense as the baseline. No matter
how that’s explained, and by whom, the education secretary
or me, it still gets heard as one-size-fits-all, a single
number, a ‘bright line’ standard. I don’t think it was
ever intended that way.”
The third
panelist, Richard Garrett, a senior analyst at Eduventures,
an education research and consulting company, said the lack
of standardized outcomes measures in higher education “can
definitely be a problem” in terms of gauging which
institutions are actually performing well. “It’s easy to
accuse all parts of higher education of having gone too far
down the road of diversity” of missions and measures,
Garrett said.
“On the
other hand,” said Garrett, noting that American colleges
have long been the envy of the world, “U.S. higher education
isn’t the way it is because of standardization. It is as
successful as it is because of diversity and choice and
letting a thousand flowers bloom,” he said, offering a voice
of caution that sounded a lot like what one might have heard
at a meeting of the National Association of Independent
Colleges and Universities or the American Federation of
Teachers.
"Accreditation: A Flawed Proposal," by Alan L. Contreras, Inside Higher Ed,
June 1, 2006 ---
http://www.insidehighered.com/views/2006/06/01/contreras
A recent report released by the
Secretary of Education’s Commission on the Future of Higher Education
recommends some major changes in the way accreditation
operates in the United States. Perhaps the most significant of these is a
proposal that a new accrediting framework “require institutions and programs
to move toward world-class quality” using best practices and peer
institution comparisons on a national and world basis. Lovely words, and
utterly fatal to the proposal.
The principal difficulty with this lofty goal is
that outside of a few rarefied contexts, most people do not want our
educational standards to get higher. They want the standards to get lower.
The difficulty faced by the commission is that public commissions are not
allowed to say this out loud because we who make policy and serve in
leadership roles are supposed to pretend that people want higher standards.
In fact, postsecondary education for most people is
becoming a commodity. Degrees are all but generic, except for those people
who want to become professors or enter high-income professions and who
therefore need to get their degrees from a name-brand graduate school.
The brutal truth is that higher standards, applied
without regard for politics or any kind of screeching in the hinterlands,
would result in fewer colleges, fewer programs, and an enormous decrease in
the number and size of the schools now accredited by national accreditors.
The commission’s report pretends that the concept of regional accreditation
is outmoded and that accreditors ought to in essence be lumped together in
the new Great Big Accreditor, which is really Congress in drag.
This idea, when combined with the commitment to
uniform high standards set at a national or international level, results in
an educational cul-de-sac: It is not possible to put the Wharton School into
the same category as a nationally accredited degree-granting business
college and say “aspire to the same goals.”
The commission attempts to build a paper wall
around this problem by paying nominal rhetorical attention to the notion of
differing institutional missions. However, this is a classic
question-begging situation: if the missions are so different, why should the
accreditor be the same for the sake of sameness? And if all business schools
should aspire to the same high standards based on national and international
norms, do we need the smaller and the nationally accredited business
colleges at all?
The state of Oregon made a similar attempt to
establish genuine, meaningful standards for all high school graduates
starting in 1991 and ending, for most purposes, in 2006, with little but
wasted money and damaged reputations to show for it. Why did it fail?
Statements of educational quality goals issued by the central bureaucracy
collided with the desire of communities to have every student get good
grades and a diploma, whether or not they could read, write or meet minimal
standards. Woe to any who challenge the Lake Wobegon Effect.
So let us watch the commission, and its
Congressional handlers, as it posits a nation and world in which the desire
for higher standards represents what Americans want. This amiable fiction
follows in a long history of such romans à clef written by the elite, for
the elite and of the elite while pretending to be what most people want.
They have no choice but to declare victory, but the playing field will not
change.
Online
Curriculum and Certification
"Online Courses Offered to Smaller Colleges," T.H.E. Journal,
September 2001, Page 16 --- http://www.thejournal.com/magazine/vault/A3621.cfm
Carnegie Technology Education (CTE) is providing
up-to-date curriculum and certification to community and smaller, four-year
colleges. The courses are designed by experts in online curriculum development
in conjunction with faculty at Carnegie Mellon University's School of Computer
Science. CTE combines live classroom instruction with online courses delivered
over an advanced Web-based system that not only provides access at any time or
place, but supports homework, testing, feedback, grading and student-teacher
communication.
CTE serves as a mentor to faculty at partner colleges
through a unique online process, guiding them throughout the teaching
experience and providing help-desk assistance, Internet-based testing,
materials and tools. CTE also promotes faculty development at partner
institutions by helping faculty keep pace with technology changes and
real-world industry demands. The program's online delivery method makes it
possible to constantly update course content, as well as continually improve
the effectiveness of teaching and testing materials.
By allowing colleges to outsource IT curriculum and
faculty training, CTE helps institutions avoid the large investments necessary
to build similar capabilities within their department. CTE's curriculum and
teacher training can also be a competitive advantage to help colleges attract
and retain qualified faculty. Carnegie Technology Education, Pittsburgh, PA,
(412) 268-3535, www.carnegietech.org.
Accreditation
Alternatives --- http://businessmajors.about.com/library/weekly/aa050499.htm
- AACSB International
- Association of Collegiate Business Schools and Programs
- American Association for Higher Education
- Association of American Colleges and Universities
- Council for Higher Education Accreditation
- Isn't Everybody Accredited? (some of the data may be slightly outdated)
- Links to accreditation issues at http://www.degree.net/guides/accreditation.html
All About Accreditation: A brief overview of what you really need to know
about accreditation, including GAAP (Generally Accepted Accrediting
Practices). Yes, there really are fake accrediting agencies, and yes, some
disreputable schools do lie. This simple set of rules tells how to sort out
truth from fiction. (The acronym is, of course, borrowed from the field of
accounting. GAAP standards are the highest to which accountants can be held,
and we feel that accreditation should be viewed as equally serious.)
GAAP-Approved Accrediting Agencies: A listing of all recognized accrediting
agencies, national, regional, and professional, with links that will allow you
to check out schools.
Agencies Not Recognized Under GAAP: A list of agencies that have been claimed
as accreditors by a number of schools, some totally phony, some
well-intentioned but not recognized.
FAQs: Some simple questions and answers about accreditation and, especially,
unaccredited schools.
"Missed Connections Online colleges complain about traditional
institutions' tough credit-transfer policies," by Dan Carnevale, The
Chronicle of Higher Education, October 18, 2002 --- http://chronicle.com/free/v49/i08/08a03501.htm
TAKING CREDIT
Students who take courses from online colleges that have national
accreditation, rather than the regional accreditation held by most traditional
colleges, often have difficulty transferring their credits to traditional
colleges. Here are some of the institutions that have granted transfer credit,
or have agreed to transfer credits in the future, for courses taught at
American Military University, which is nationally accredited but not
regionally accredited:
- Bellevue University
- Blue Ridge Community College
- Boston College
- Capella University
- Coastal Carolina Community College
- Community College of the Air Force
- Embry-Riddle Aeronautical University
- Excelsior College
- Fielding Graduate Institute
- Hillsborough Community College
- Hinds Community College
- Liberty University
- Northwood University
- Potomac College
- Strayer University
- Texas A&M University System
- United Nations Institute for Training and Research
- U.S. Air Force Officer Training School
- U.S. Army Management Staff College
- U.S. Merchant Marine Academy Global Maritime and Transportation School
- University of Oklahoma
- Western Baptist College
- West Virginia University
Many colleges refuse to
grant credit for courses at American Military University, including the
following:
- Northcentral University
- Park University
- University of Maryland at College Park
- University of Maryland University College
- University of Virginia
Continued at http://chronicle.com/free/v49/i08/08a03501.htm
From
the Syllabus News on December 24, 2001
Commerce Bancorp,
Inc., which calls itself "America's Most Convenient Bank," said
training courses provided through its Commerce University have received
expanded credit recommendations from the American Council on Education (ACE).
The bank, whose employees can receive college credit through the program, has
received credit recommendations for two customer service training programs.
Employees may apply the credit recommendations to college degree programs in
which they are participating. Commerce University offers nearly 1,700 courses
to employees each year via seven schools related to its areas of operation,
including its School of Retail Banking, School of Lending, and School of
Insurance.
For more information,
visit: http://commerceonline.com
Bob Jensen's threads on distance
education and training courses can be found at http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
From Infobits on July 27, 2001
VISIBLE KNOWLEDGE PROJECT
The Visible Knowledge Project (VKP) is a five-year
collaborative project focused on "improving the quality of college and
university teaching through a focus on both student learning and faculty
development in technology-enhanced environments."
In the course of the project faculty on twenty-five
campuses will "design and conduct systematic classroom research
experiments focused on how certain student-centered pedagogies, enhanced by a
variety of new technologies, improve higher order thinking skills and
significant understanding in the study of history, literature, culture, and
related interdisciplinary fields."
Resources generated by the project will include: -- a
set of curriculum modules representing the reflective work of the faculty
investigators; -- three research monographs capturing the findings of the
project; -- a set of multimedia faculty development resources; -- a set of
guides, directed at students, for novice learners to better use primary
historical and cultural material on the Internet; and -- a set of online
faculty development and support seminars, for the investigating faculty,
faculty on the core campuses, and graduate students participating in the
Project's professional development programs.
For more information about VKP, link to http://crossroads.georgetown.edu/vkp/
The Visible Knowledge Project is based at Georgetown
University's Center for New Designs in Learning and Scholarship (CNDLS). For
more information about CNDLS, see their website at http://candles.georgetown.edu/
Project partners include the American Studies
Association's Crossroads Project, the Center for History and New Media (George
Mason University), the American Social History Project (CUNY Graduate Center),
the Carnegie Foundation for the Advancement of Teaching, and the TLT Group
with the American Association for Higher Education.
From Infobits on July 27, 2001
NEW JOURNAL ON INFORMATION AND COMPUTER SCIENCES TEACHING AND LEARNING
INNOVATIONS IN TEACHING AND LEARNING IN INFORMATION AND COMPUTER SCIENCES
ELECTRONIC JOURNAL (ITALICS) is a new peer-reviewed online journal published
by the Learning and Teaching Support Network Centre for Information and
Computer Sciences (LTSN-ICS). ITALICS Electronic Journal will contain papers
on current information and computer sciences teaching, including: developments
in computer-based learning and assessment; open learning, distance learning,
collaborative learning, and independent learning approaches; staff
development; and the impact of subject centers on learning and teaching.
The journal is available, at no cost, at http://www.ics.ltsn.ac.uk/pub/italics/index.html
The Changing Faces of Virtual
Education --- http://www.col.org/virtualed/
Dr. Glen Farrell, Study Team Leader and Editor
The Commonwealth of
Learning
RELEASED IN JULY 2001
by The Commonwealth of Learning (COL): The Changing Faces of Virtual
Education, a study on the latest “macro developments” in virtual
education. This is a follow-up on COL’s landmark study on current trends in
“virtual” delivery of higher education (The Development of Virtual
Education: A global perspective, 1999). Both reports were funded by the
British Department for International Development and are available on this web
site.
One of the
conclusions of the authors of the 1999 report was that the development of
virtual education was “more rhetorical than real!” Dr. Glen Farrell, study
team leader and editor of both reports, says “This follow-up study concludes
that, two years later, virtual education development is a lot more rhetorical,
and a lot more real!”
In terms of the
rhetoric, virtual education is now part of the planning agenda of most
organisations concerned with education and training. And the terminology being
used to describe the activities is even more imprecise and confusing! On the
reality side, there are many more examples of the use of virtual education in
ways that add value to existing, more traditional delivery models. However, a
remarkable feature of this surging interest in virtual education is that it
remains largely focussed on ways to use technology to deliver the traditional
educational products (i.e., programmes and courses) in ways that make them
more accessible, flexible, and cheaper and that can generate revenues for the
institution.
As global discussions
on closing the “digital divide” have observed, it is not surprising that
the report notes that a major feature of the current state of virtual
education development is that it depends on where you live. The growth is
largely occurring in countries with mature economies and established
information and communication technology (ICT) infrastructure. A lack of such
infrastructure, together with the lack of development capital, means that the
developing countries of the world have not been able to, as yet, use virtual
education models in their efforts to bring mass education opportunities to
their citizens.
However, the report
demonstrates that there are several trends emerging that are likely to bring
about radical changes to the way we think about the concepts of campus,
curriculum, courses, teaching/learning processes, credentials/awards and the
way ICTs can be utilised to enable and support learning. These trends, called
“macro developments” in the report, include new venues for learning, the
use of “learning objects” to define and store content, new organisational
models, online learner support services, quality assurance models for virtual
education and the continuing evolution of ICTs. Each of these “macro
developments” is defined and described in separate chapters of the report.
The final chapter looks at their impact on the development of virtual
education models in the future. While the conclusions will be of general
interest, particular attention has been paid to the role these developments
are likely to have in the evolution of virtual education systems in developing
countries.
The entire study is
available on-line from this page. By clicking on the various hyperlinks below
you will be able to download and open the individual chapters or the entire
book in Acrobat (.PDF) format. (The chapter files are not created with
internal bookmark hyperlinks, but the all-in-one file has bookmarks throughout
for easier navigation.) Acrobat documents can also be resized on screen for
readability but are usually best viewed when printed. Adobe Acrobat version
3.0 is required to download and read the files. With version 4.0 each
Chapter's actual page numbering is retained in Acrobat's "Go To
Page" facility and "Print Range" selections.
The Changing Faces of Virtual Education
CHAPTER FILES TO VIEW OR DOWNLOAD IN PDF FORMAT
- Preliminary pages: title page, copyright page, contents (pg. i-iv) 160kb
- Foreword, Prof. Gajaraj Dhanarajan, and Acknowledgements (pg. v-viii) 120kb
- Chapter 1: Introduction, Dr. Glen M. Farrell (pg. 1-10) 234kb
- Chapter 2: The Changing Venues for Learning, Mr. Vis Naidoo (pg. 11-28) 307kb
- Chapter 3: The Continuing Evolution of ICT Capacity: The Implications for Education, Dr. Tony Bates (pg. 29-46) 335kb
- Chapter 4: Object Lessons for the Web: Implications for Instructional Development, Mr. David Porter (pg. 47-70) 639kb
- Chapter 5: The Provision of Learner Support Services Online, Dr. Yoni Ryan (pg. 71-94) 389kb
- Chapter 6: The Development of New Organisational Arrangements in Virtual Learning, Dr. Peter J. Dirr (pg. 95-124) 448kb
- Chapter 7: Quality Assurance, Ms. Andrea Hope (pg. 125-140) 304kb
- Chapter 8: Issues and Choices, Dr. Glen Farrell (pg. 141-152) 247kb
Note especially that Andrea Hope's
Chapter 7 deals with assessment issues. She mentions three sites that
attempt to weed out suspicious degree programs.
degree.net --- http://www.degree.net/
(note the links to accreditation issues at http://www.degree.net/guides/accreditation.html)
Most of the calls and
e-mail messages we get concern accreditation: What is it, how important is it,
how can you tell if a school's really accredited, and so forth. While
accreditation is a complex and sometimes baffling field, it's really quite
simple to get the basics. This on-line guide offers you:
All About Accreditation: A brief overview of what you really need to know
about accreditation, including GAAP (Generally Accepted Accrediting
Practices). Yes, there really are fake accrediting agencies, and yes, some
disreputable schools do lie. This simple set of rules tells how to sort out
truth from fiction. (The acronym is, of course, borrowed from the field of
accounting. GAAP standards are the highest to which accountants can be held,
and we feel that accreditation should be viewed as equally serious.)
GAAP-Approved Accrediting Agencies: A listing of all recognized accrediting
agencies, national, regional, and professional, with links that will allow you
to check out schools.
Agencies Not Recognized Under GAAP: A list of agencies that have been claimed
as accreditors by a number of schools, some totally phony, some
well-intentioned but not recognized.
FAQs: Some simple questions and answers about accreditation and, especially,
unaccredited schools.
AboutEducation at http://www.about.com/education/
WorldwideLearn --- http://www.worldwidelearn.com/
At this site you'll
find hundreds of online courses and learning resources in 46 subject areas
offered by educational institutions, companies and individuals from all over
the world.
Online Training, Long Distance Learning, Distance Education, eLearning,
Web-based Training: whatever you call it, learning online is about you and how
you can pursue learning and education at your convenience. It's learning when
you want and where you want.
What do you want to learn? Do you want to:
- get a degree online
- train for a new career
- learn web design
- find corporate training resources
- take professional development courses
- learn new software
- continue your education
- learn a new skill or hobby
Whatever your goals
are, World Wide Learn is here to help you find the online courses, learning
and education that you want.
Use this site as your
first step towards continuing your education online.
Other training and
education finders are listed at http://faculty.trinity.edu/rjensen/crossborder.htm
Linda Peters provides a frank overview of the various factors underlying
student perceptions of online learning. Such perceptions, she observes, are not
only informed by the student's individual situation (varying levels of computer
access, for instance) but also by the student's individual characteristics: the
student's proficiency with computers, the student's desire for interpersonal
contact, or the student's ability to remain self-motivated ---
Technology Source, a free, refereed, e-journal at http://horizon.unc.edu/TS/default.asp?show=issue&id=44
IN THE SEPTEMBER/OCTOBER 2001 ISSUE
"Improving Student Performance in
Distance Learning Courses," by Judy A. Serwatka, T.H.E. Journal, April
2002, pp. 46-51 --- http://www.thejournal.com/magazine/vault/A4002.cfm
The tests were
particularly problematic. Quizzes were not given for the on-campus course
since it was an introductory course, and the students seemed to keep up well
with the material. But I discovered the online students were not studying the
appropriate material for the tests. To address this, online quizzes were
introduced to the course Web site for the students to take as many times as
they wanted. The scores are not recorded and the questions are in the same
format as on the actual tests, although they are not exactly the same. Ten
questions are chosen randomly from a bank of 20 for each quiz. In addition,
each chapter has its own quiz. Students say they have found these quizzes to
be invaluable.
The tests have been
developed in a manner similar to the quizzes. Each 100-point test is created
from a 200-question test bank. As each student logs in, their test is created
randomly from the test bank. This makes cheating extremely difficult because
each test contains different questions. Even if the questions are the same,
they are randomized so they do not appear in the same order. And although the
test is open book, the students are admonished to study, because the questions
are in random order and they do not have time to look up the answers to each
question. The tests are timed and automatically submitted at the end of the
time limit. The addition of these practice quizzes has dramatically improved
performance on the tests.
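The randomization scheme Serwatka describes (a quiz drawing 10 of 20 bank questions, a test drawing from a 200-question bank, with question order shuffled for each student) can be sketched in a few lines. This is a minimal illustration only; the function name and string-based questions are my own choices, not taken from any actual course software.

```python
import random

def build_assessment(question_bank, num_questions, seed=None):
    """Draw a random subset of a question bank and shuffle its order,
    so each student sees different questions in a different sequence."""
    rng = random.Random(seed)
    selected = rng.sample(list(question_bank), num_questions)  # unique draws
    rng.shuffle(selected)  # randomize order even when two draws overlap
    return selected

# A chapter quiz draws 10 of 20 bank questions; a full test would
# draw 100 questions from a 200-question bank the same way.
chapter_bank = [f"Q{i}" for i in range(1, 21)]
quiz = build_assessment(chapter_bank, 10)
```

Because `random.sample` draws without replacement, no student sees a repeated question, while two students' tests overlap only partially and in different orders, which is the property the article credits with making cheating difficult.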
A point about testing
that should be made is that many educators are concerned about students
finding someone else to take tests for them. I agree with the statement made
by Palloff and Pratt (1999): "Cheating is irrelevant in this process
because the participant would be cheating only him- or herself." Although
attempts are made to minimize the threat, educators should not let this
prevent them from teaching online. Technology will allow educators to verify
the identity of students taking online tests in the future, so educators must
trust students for now.
September 22 message from Craig Polhemus [Joedpo@AOL.COM]
A book by the same authors was included in the AAA's
Faculty Development Bookshelf, which was undergoing a "slow
shutdown" the last I head, so some discounted copies may still be
available
Classroom Assessment Techniques: A Handbook for
College Teachers (2nd Ed), T.A. Angelo and K.P. Cross, Jossey-Bass, San
Francisco , 1993.
(This book is said to be a classic and provides useful examples of assessment
techniques.)
Software
for Online Examinations and Quizzes
November 1, 2012 Respondus message from Richard Campbell
Is the student taking your class the same one who is taking your exams??
Keep an eye on
www.respondus.com
Bob Jensen's threads about online cheating ---
http://faculty.trinity.edu/rjensen/Plagiarism.htm#OnlineCheating
Software for online examinations and quizzes ---
http://faculty.trinity.edu/rjensen/Assess.htm#Examinations
Question
How can I give online examinations?
Answer
If it's a take home test the easiest thing is probably to put an examination up
on a Web server or a Blackboard/WebCT server. For example, you might put up a
Word doc file or an Excel xls file as a take home examination. You can even
embed links to your Camtasia video files in that examination so that video
becomes part of an examination question. Then have each student download the
exam, fill out the answers, and return the file to you via email attachment for
grading. One risk is that the returned file might have a virus even though the
student is not aware that his/her computer added a virus.
In order to avoid the virus risk of files students attach via email, I had an
old computer that I used to open all email attachments from most anybody. Then
in the rare event that the attached file was carrying a virus I did not infect
my main machines. Good virus protection software is essential even on your old
computer.
If students are restricted as to what materials can be used during
examinations or who can be consulted for help, an approach that I used is
examination partnering. I posted quizzes (not full examinations) at a common
time when students were required to take the quiz. Each student was randomly
assigned a partner, and each took the quiz in the presence of that partner.
Each student was then required to sign an attest form stating that his/her
partner abided by the rules of the examination. I only used this for weekly
quizzes. Course examinations were given in class with me as a proctor.
Partnered quizzes worked very well in courses where students had to master
software like MS Access. They could perform software usage activities as
part of the quiz.
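The random partner assignment described above is easy to automate. A minimal sketch follows; the function name, roster handling, and odd-roster trio rule are my own illustration, not software I actually used:

```python
import random

def assign_partners(students, seed=None):
    """Randomly pair students for partnered quizzes.

    Returns a list of groups (lists of student names). With an odd
    roster, the leftover student is folded into the last pair to
    make a trio, so nobody takes the quiz unobserved.
    """
    rng = random.Random(seed)
    roster = list(students)
    rng.shuffle(roster)
    groups = [roster[i:i + 2] for i in range(0, len(roster), 2)]
    if len(groups) > 1 and len(groups[-1]) == 1:
        groups[-2].append(groups[-1].pop())  # fold the odd student in
        groups.pop()
    return groups
```

Passing a seed makes the pairing reproducible, which is useful if a student later disputes who attested for whom.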
Giving online interactive examinations via a Web server is more problematic.
A huge problem is that most universities do not allow student feedback on
instructors' Web pages. When you fill a shopping cart at an online vendor site
such as Amazon, the site lets you as a customer send a signal back to its
server that you added something to your cart. Universities do not generally
allow this type of interactive feedback from students on a faculty Web server.
Believe it or not, I resist forwarding advertising. Whenever I communicate
about products, there is no remuneration to me in any way.
The following message is an advertisement, and I have never tried these
products (i.e., no free samples for Bob). But these products do sound
interesting, so I thought you might like to know about them. It's a really
competitive world for vendors of course authoring tools. Products have to have
something special to be "survivors."
I added the product message below to the following sites:
Assessment and Testing --- http://faculty.trinity.edu/rjensen/assess.htm
History of Course Authoring Systems --- http://faculty.trinity.edu/rjensen/290wp/290wp.htm
February 25, 2004 from Leo Lucas [leo@e-learningconsulting.com]
e-Learning Course
Development Kit
Many people use HTML editors
such as Dreamweaver and FrontPage to create e-learning courses. While these
editors are great for creating information, they lack essential e-learning
features. The e-Learning Course Development Kit provides these features. The
Kit provides templates to create questions, course-wide navigation, a table
of contents and links for a glossary and other information. The Kit creates
courses that work with SCORM, a standard way to communicate with a Learning
Management System (LMS). The support for SCORM lets you run the course in
multiple sessions, keep track of bookmarks and record the student's progress
through the course. The Kit can be purchased online for $99.
Test Builder
Test Builder lets you author
tests quickly and easily with a text editor. Absolutely no programming is
required. With Test Builder you can create tests and quizzes with
true-false, multiple choice, fill-in-the-blank and matching questions. It
can randomize the sequence of questions and choices and it can randomly
select questions from a question pool. You can limit the number of attempts
and set the passing score. Test Builder supports SCORM. Test Builder
can be purchased online for $149.
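Randomized pooling and shuffling of the kind Test Builder advertises is straightforward to prototype. The sketch below is an illustration of the general idea only, not Test Builder's actual implementation; the question-dictionary layout is an assumption:

```python
import random

def build_quiz(question_pool, n_questions, seed=None):
    """Draw a random subset of questions from a pool and shuffle
    both the question order and each question's answer choices.

    Each question is a dict: {"prompt": str, "choices": [str, ...],
    "answer": str}. The correct answer is tracked by value, so
    shuffling the choices never loses it.
    """
    rng = random.Random(seed)
    quiz = []
    for q in rng.sample(question_pool, n_questions):  # random subset, random order
        choices = list(q["choices"])
        rng.shuffle(choices)
        quiz.append({"prompt": q["prompt"], "choices": choices,
                     "answer": q["answer"]})
    return quiz
```

Because each student can be given a different seed, adjacent students see the questions and choices in different orders, which is the anti-cheating point of such tools.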
We wanted to create e-learning
tools that would work in an academic setting. So we created tools with
these capabilities:
- The tools are affordable.
- They work for the casual user.
You can create a small course or test without much fuss.
- They come with documented
source code so you can modify or extend the tools to meet your specific
needs.
- They add value to your
existing investments in technology. They will deliver courses/tests in a
browser and work with an LMS that supports SCORM 1.2.
Please let me know if you need
more information about these tools. Thanks, Leo
P.S. Your home in the white
mountains is beautiful.
Hi Bob,
I recommend that you take
a look at Exam Builder 4 at http://www.exambuilder.com/
- Web-based
interface, works like Hotmail
- No programming or
html required
- Multiple choice,
fill-in-the-blank, and true/false question types
- 2 Exam Types: Click
and Learn Exams force students to answer each question correctly before
they can continue to the next question. Educators can optionally provide
instant feedback. Certification Exams allow students to skip
questions, flag questions, review questions answered, and change answers
prior to submitting the exam
- All questions are
delivered to students in random order and multiple choice answers are
scrambled to guard against cheating
- Multiple question
pools per exam to evaluate knowledge gaps, with remediation reports
available for students based on performance
- Document Library
to offer instant feedback on incorrect questions
- Ability to upload
graphics to be incorporated in questions
- Students can
easily be grouped into classes
- Detailed reports
on both student results and exam statistics. Every answer a student clicks
on is recorded in the database
- Data archiving and
storage with tape backup for compliance ready solutions
Create a FREE evaluation
account today and be up and running in 5 minutes with no obligation!
My threads on assessment are at http://faculty.trinity.edu/rjensen/assess.htm
Hope this helps!
Bob Jensen
Bob,
I've scheduled a
health economics class in a computer lab this spring. The PCs are configured
with their CRTs tightly packed. I'd like to be able to use the machines to
give quizzes and exams, but the proximity of the CRTs makes at least casual
"peeking" almost a certainty.
Can you suggest or
point me to any software into which I could insert quiz or exam questions that
would: shuffle the order of questions on the screen; shuffle the order of
multiple-choice answer choices; randomize the numbers in quantitative
problems; keep track of the answers; and automatically score the responses
and send me a file of grades?
Back in the Apple II
days, there was SuperPilot. But that language does not seem to have been
successful enough to be ported to IBM PCs, to say nothing of being revised
and improved.
Thanks for whatever
thoughts you might be able to share,
Bob XXXXX
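The feature list in that request (randomized numbers in quantitative problems, automatic scoring, and a file of grades) can be sketched in a few lines. Everything below is illustrative: the function names, the CSV layout, and the sample interest question are my own assumptions, not any real product:

```python
import csv
import random

def make_quant_question(seed):
    """Generate a simple-interest problem with randomized numbers,
    so adjacent students see different figures on packed CRTs."""
    rng = random.Random(seed)
    principal = rng.randrange(1000, 9000, 100)
    rate = rng.choice([0.04, 0.05, 0.06])
    prompt = f"Simple interest on ${principal} at {rate:.0%} for one year?"
    return prompt, round(principal * rate, 2)

def score_and_export(responses, answer_keys, path):
    """Score each student's answers against his/her personal key and
    write a CSV grades file the instructor can collect.

    responses and answer_keys map student name -> list of answers.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["student", "score"])
        for student, answers in responses.items():
            key = answer_keys[student]
            score = sum(a == k for a, k in zip(answers, key))
            writer.writerow([student, score])
```

Seeding the number generator per student means each machine shows different numbers, which directly addresses the "casual peeking" problem of tightly packed monitors.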
February 15, 2003 message from caking [caking@TEMPLE.EDU]
Respondus has exam software for Blackboard, WebCT, and
others. I am just now trying it out --- http://www.respondus.com/
Carol King
Temple University
The term "electronic portfolio," or
"ePortfolio," is on everyone's lips. What does this mean?
"The Electronic Portfolio Boom: What's it All About?," by Trent
Batson, Syllabus, December 2002, pp. 14-18 --- http://www.syllabus.com/article.asp?id=6984
(Including Open Knowledge Initiative OKI, Assessment, Accreditation, and Career
Trends)
The term "electronic portfolio," or
"ePortfolio," is on everyone's lips. We often hear it associated
with assessment, but also with accreditation, reflection, student resumes, and
career tracking. It's as if this new tool is the answer to all the questions
we didn't realize we were asking.
A portfolio, electronic or paper, is simply an
organized collection of completed work. Art students have built portfolios for
decades. What makes ePortfolios so enchanting to so many is the intersection
of three trends:
- Student work is now mostly in electronic form, or
is based on a canonical electronic file even if it's printed out: papers,
reports, proposals, simulations, solutions, experiments, renditions,
graphics, or just about any other kind of student work.
- The Web is everywhere: We assume (not always true,
of course) that our students have ready access to the Web. The work is
"out there" on the Internet, and therefore the first step for
transferring work to a Web site has already been taken.
- Databases are available through Web sites,
allowing students to manage large volumes of their work. The
"dynamic" Web site that's database-driven, instead of HTML
link-driven, has become the norm for Web developers.
We've reached a critical mass, habits have changed,
and as we reach electronic "saturation" on campus, new norms of work
are emerging. Arising out of this critical mass is a vision of how higher
education can benefit, which is with the ePortfolio.
We seem to be beginning a new wave of technology
development in higher education. Freeing student work from paper and making it
organized, searchable, and transportable opens enormous possibilities for
re-thinking whole curricula: the evaluation of faculty, assessment of
programs, certification of student work, how accreditation works. In short,
ePortfolios might be the biggest thing in technology innovation on campus.
Electronic portfolios have a greater potential to alter higher education at
its very core than any other technology application we've known thus far.
The momentum is building. A year ago, companies I
talked with had not even heard of ePortfolios. But at a focus session in
October, sponsored by Educause's National Learning Infrastructure Initiative (
www.educause.edu/nlii/
), we found out how far this market has come: A number of technology vendors
and publishers are starting to offer ePortfolio tools. The focus session
helped us all see the bigger picture. I came away saying to myself, "I
knew it had grown, but I had no idea by how much!"
ePortfolio developers are making sure that their
platforms can accept the full range of file types and content: text, graphics,
video, audio, photos, and animation. The manner in which student work is
turned in, commented on, turned back to students, reviewed in the aggregate
over a semester, and certified can be—and is being—deeply altered and
unimaginably extended.
This tool brings to bear the native talents of
computers—storage, management of data, retrieval, display, and
communication—to challenge how to better organize student work to improve
teaching and learning. It seems, on the surface, too good to be true.
ePortfolios vs. Webfolios
Since the mid-90s, the term "ePortfolio" or
"electronic portfolio" has been used to describe collections
of student work at a Web site. Within the field of composition
studies, the term "Webfolio" has also been used. In this
article, we are using the current, general meaning of the term, which
is a dynamic Web site that interfaces with a database of student work
artifacts. Webfolios are static Web sites where functionality derives
from HTML links. "E-portfolio" therefore now refers to
database-driven, dynamic Web sites, not static, HTML-driven sites.
So, What's the Bad News?
Moving beyond the familiar one-semester/one-class limits of managing student
learning artifacts gets us into unfamiliar territory. How do we alter the
curriculum to integrate portfolios? How do we deal with long-term storage,
privacy, access, and ongoing vendor support? What about the challenge of
interoperability among platforms so student work can move to a new campus upon
transfer?
In short, how do we make the ePortfolio an enterprise
application, importing data from central computing, serving the application on
a central, secure server, and managing an ever-enlarging campus system?
Electronic portfolios have great reach in space and time so they will not be
adopted lightly. We've seen how extensively learning management systems such
as WebCT, Blackboard, and Angel can alter our campuses. ePortfolios are much
more challenging for large-scale implementations.
Still, ePortfolio implementations are occurring on
dozens if not hundreds of campuses. Schools of education are especially good
candidates, as they're pressured by accrediting agencies demanding
better-organized and accessible student work. Some statewide systems are
adopting ePortfolio systems as well. The Minnesota State Colleges and
Universities system and the University of Minnesota system have ePortfolios.
Electronic portfolio consortia are also forming. The open-source movement,
notably MIT's Open Knowledge Initiative (OKI), has embraced the ePortfolio as
a key application within the campus computing virtual infrastructure.
Moreover, vendors, in order to establish themselves
as the market begins to take shape, are already introducing ePortfolio tools.
Several companies, including Blackboard, WebCT, SCT, Nuventive, Concord, and
McGraw-Hill, are said either to have or to be developing electronic-portfolio
tools.
ePortfolio Tools and Resources
Within the National Learning Infrastructure Initiative is a group
called The Electronic Portfolio Action Committee (EPAC). EPAC has been
led over the last year by John Ittelson of Cal State Monterey Bay.
Helen Barrett of the University of Alaska at Anchorage, a leading
founder of EPAC, has been investigating uses of ePortfolio tools for
years. MIT's Open Knowledge Initiative (OKI) has provided leadership
and consulting for the group, along with its OKI partner, Stanford
University. The Carnegie Foundation has been active within EPAC, as
have a number of universities.
What follows is a list of ePortfolio tools now available or in
production:
• Epsilen Portfolios, IUPUI, www.epsilen.com
• The Collaboratory Project, Northwestern, http://collaboratory.nunet.net
• Folio Thinking: Personal Learning Portfolios, Stanford, http://scil.stanford.edu/research/mae/folio.html
• Catalyst Portfolio Tool, University of Washington, www.catalyst.washington.edu
• MnSCU e-folio, Minnesota State Colleges and Universities, www.efoliomn.com
• Carnegie Knowledge Media Lab, Carnegie Foundation for the
Advancement of Teaching, www.carnegiefoundation.org/kml/
• Learning Record Online (LRO) Project, The Computer Writing and
Research Lab at the University of Texas at Austin, www.cwrl.utexas.edu/~syverson/olr/contents.html
• Electronic Portfolio, Johns Hopkins University, www.cte.jhu.edu/epweb
• CLU Webfolio, California Lutheran University, www.folioworld.com
• Professional Learning Planner, Vermont Institute for Science,
Math and Technology, www.vismt.org
• Certification Program Portfolio, University of
Missouri-Columbia and LANIT Consulting, https://portfolio.coe.missouri.edu/
• Technology Portfolio and Professional Development Portfolio,
Wake Forest University Department of Education, www.wfu.edu/~cunninac/edtech/technologyportfolio.htm
• e-Portfolio Project, The College of Education at the University
of Florida, www.coe.ufl.edu/school/portfolio/index.htm
• PASS-PORT (Professional Accountability Support System using a
PORTal Approach) University of Louisiana at Lafayette and Xavier
University of Louisiana, www.thequest.state.la.us/training/
• The Connecticut College e-Portfolio Development Consortium, www.union.edu/PUBLIC/ECODEPT/kleind/conncoll/
• The Kalamazoo College Portfolio, Kalamazoo College, www.kzoo.edu/pfolio
• Web Portfolio, St. Olaf College, www.stolaf.edu/depts/cis/web_portfolios.htm
• The Electronic Portfolio, Wesleyan University, https://portfolio2.wesleyan.edu/names.nsf?login
• The Diagnostic Digital Portfolio (DDP), Alverno College, www.ddp.alverno.edu/
• E-Portfolio Portal, University of Wisconsin-Madison, http://portfolios.education.wisc.edu/
• Web Folio Builder, TaskStream Tools of Engagement, www.taskstream.com
• FolioLive, McGraw-Hill Higher Education, www.foliolive.com
• Outcomes Assessment Solutions, TrueOutcomes, www.trueoutcomes.com/index.html
• Chalk & Wire, www.chalkandwire.com
• LiveText, www.livetext.com
• LearningQuest Professional Development Planner, www.learning-quest.com/
• Folio by eportaro, www.eportaro.com
• Concord (a digital content server for BlackBoard systems), www.concord-usa.com
• iWebfolio by Nuventive (now in a strategic alliance with SCT), www.iwebfolio.com
• Aurbach & Associates, www.aurbach.com/
Continued at http://www.syllabus.com/article.asp?id=6984
Grade Inflation Versus Teaching Evaluations
The investigation revealed that 91 percent of
Harvard's students graduated cum laude.
Thomas Bartlett and Paula Wasley, "Just Say 'A': Grade Inflation Undergoes Reality Check:
The notion of a decline in standards draws crusaders and skeptics," Chronicle of Higher Education, September 5,
2008 ---
http://chronicle.com/weekly/v55/i02/02a00104.htm?utm_source=wb&utm_medium=en
How do you measure the best religion? The best
marriage? Hard to say. The same is true in assessing colleges.
Bernard Fryshman, "Comparatively
Speaking," Inside Higher Ed, February 21, 2007 ---
http://www.insidehighered.com/views/2007/02/21/fryshman
"Imagining College Without Grades," by Scott Jaschik, Inside Higher
Ed, January 22, 2009 ---
http://www.insidehighered.com/news/2009/01/22/grades
Kathleen
O’Brien, senior vice president for academic affairs at
Alverno College, said she realized that it might seem like
the panelists were “tilting at windmills” with their vision
for moving past grades. But she said there may be an
alignment of ideas taking place that could move people away
from a sense that grades are inevitable. First, she noted
that several of the nation’s most
prestigious law schools have moved away from traditional
letter grades, citing a sense that
grades were squelching intellectual curiosity. This trend
adds clout to the discussion and makes it more difficult for
people to say that grades need to be maintained because
professional schools value them. Second, she noted that the
growing use of e-portfolios has dramatized the potential for
tools other than grades to convey what students learn.
Third, she noted that just about everyone views grade
inflation as having destroyed the reliability of grades.
Fourth, she said that with more students taking courses at
multiple colleges — including colleges overseas — the idea
of consistent and clear grading just doesn’t reflect the
mobility of students. And fifth, she noted the reactions in
the room, which are typical of academic groups in that most
professors and students are much more likely to complain
about grading than to praise its accuracy or value. This is
a case of an academic practice, she noted, that is
widespread even as many people doubt its utility.
At the same
time, O’Brien said that one thing holding back colleges from
moving was the sense of many people that doing away with
grades meant going easy on students. In fact, she said,
ending grades can mean much more work for both students and
faculty members. Done right, she said, eliminating grades
promotes rigor.
Continued in article
The controversial RateMyProfessor site now links to Facebook entries for
professors
Our new Facebook app lets you search for, browse, and read
ratings of professors and schools. Find out
which professor will inspire you, challenge
you, or which
will just give you the easy A.
RateMyProfessor ---
http://www.ratemyprofessors.com/index.jsp
Bob Jensen's threads on RateMyProfessor are at
http://faculty.trinity.edu/rjensen/Assess.htm#RateMyProfessor
Business ranks at the bottom: 23% of the responding
students reported only 1-5 hours of homework per week!
This might in part explain why varsity athletes choose business as a major
in college.
"Homework by Major," by Mark Bauerlein, Chronicle of Higher
Education, May 5, 2008 ---
http://chronicle.com/review/brainstorm/index.php?id=422
Stephen’s
post last week
about reading complained that students don’t want any more homework, and
their disposition certainly shows up in the surveys. In the 2006
National Survey of Student Engagement,
almost one in five college seniors devoted five hours or less per week to
“Preparing for class,” and 26 percent stood at six to ten hours per week.
College professors say that achievement requires around 25 hours per week of
homework, but only 11 percent reached that mark.
The 2007 NSSE numbers break responses down by
major, and the homework levels for seniors are worth comparing. Here are
numbers for 15 hours or less.
Arts and Humanities majors came in at 16 percent
doing 1-5 hours of homework per week, 25 percent at 6-10 hours, and 20
percent at 11-15 hours.
Biological Sciences: 12 percent do 1-5 hours, 22
percent do 6-10, and 20 percent do 11-15 hours.
Business: 23 percent at 1-5, 30 percent at 6-10,
and 19 percent at 11-15 hours.
Education: 16 percent at 1-5, 27 percent at 6-10,
and 21 percent at 11-15 hours.
Engineering: 10 percent at 1-5, 19 percent at 6-10,
and 17 percent at 11-15 hours.
Physical Science: 12 percent at 1-5 hours, 21
percent at 6-10, and 18 percent at 11-15 hours.
Social Science: 20 percent at 1-5 hours, 28 percent
at 6-10, and 20 percent at 11-15 hours.
Grade Inflation and Dysfunctional Teaching Evaluations (the biggest
scandal in higher education) ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
MBA students at Cornell University voted for grade nondisclosure, effective immediately, after years
of agitating for a policy similar to those in place at other highly selective
M.B.A. programs ---
http://www.insidehighered.com/news/2018/09/13/cornell-mba-students-vote-grade-nondisclosure-recruitment?utm_source=Inside+Higher+Ed&utm_campaign=f6d0008933-WNU_COPY_01&utm_medium=email&utm_term=0_1fcbc04421-f6d0008933-197565045&mc_cid=f6d0008933&mc_eid=1e78f7c952
Jensen Comment
Graduates of Cornell's MBA program are honor bound not to disclose grades until
recruiters make them job offers. Presumably Cornell University will provide
transcripts without grades revealed for courses, including courses taken on a
pass-fail basis.
It's not clear, at least to me, that this will also apply to graduates
seeking admission to graduate programs such as Ph.D. programs. This would
greatly hurt some applicants such as those with very high grades who don't do as
well on GMAT or other Ph.D. program admission criteria.
Of course this begs the question of why assign grades at all. If graduates
with low grades can compete equally with graduates with the highest grades, then
what's the incentive for a student to go through blood, sweat, and tears for
high grades?
What's especially unfortunate is that some socially disadvantaged students
who really dig in and excel with high GPAs can no longer compete with the
extroverted smooth talkers who do better in job interviews. An extreme case is
borderline Asperger Syndrome students who earn exceptional grades.
https://en.wikipedia.org/wiki/Asperger_syndrome
I know about this because I have a grandson who is
a straight-A student in his junior year as a computer science major with
Asperger Syndrome. After I tell him and his parents about this you can bet he
will not be applying for Cornell's MBA program. In fact, I question the legality
of this policy under the ADA Act of 1990 ---
https://en.wikipedia.org/wiki/Americans_with_Disabilities_Act_of_1990
Some disabled graduates excel in grades when they do
poorly on many of the other criteria for getting jobs.
Come on Cornell MBA students --- Let's party
Cornell University, like all Ivy League
Universities, is embarrassed by grade inflation where professors give mostly A
grades across the entire campus. Unlike other Ivy Universities, Cornell tried to
shame professors into giving fewer A grades in a five-year experiment. The
result was that the professors could not be shamed into reducing grade inflation
in their courses.
Question
If median grades for each course are made publicly available on the Internet,
will students seek out the high grade average or low grade average courses?
Examples of such postings at Cornell University were previously posted at
http://registrar.sas.cornell.edu/Student/mediangradesA.html
Hypothesis 1
Students will seek out the lower grade average courses/sections thinking that
they have a better chance to compete for high grades.
Hypothesis 2
Students will seek out the higher grade average courses/sections thinking that
particular instructors are easier graders.
However, when Cornell researchers
studied about 800,000+ course grades issued at Cornell from 1990 to 2004, they
found that most students visited the site to shop for classes where the median
grade was higher. Plus, professors who tended to give out higher grades were
more popular. Students with lower SAT scores were the most likely to seek out
courses with higher median grades.
"Easy A's on the Internet: A surprising Cornell experiment in posting grades;
plus a look at recent research into ethical behavior, service charges, and
volunteer habits," by Francesca Di Meglio, Business Week, December 11,
2007 ---
http://www.businessweek.com/bschools/content/dec2007/bs20071211_885308.htm?link_position=link2
In a striking example of
unintended consequences, a move by Cornell University to
give context to student grades by publicly posting median
grades for courses has resulted in exactly the opposite
student behavior than anticipated.
Cornell's
College of Arts & Sciences originally set up a
Web site in 1997 where median
grades were posted, with the intention of also printing
median class grades alongside the grade the student actually
received in the course on his or her permanent transcript.
Administrators thought students would use the information on
the Web site to seek out classes with lower median
grades—because, they reasoned, an A in a class that has a
median grade of B-minus would be more meaningful than say,
an A in a course where the median was A-plus.
Course
Shopping Leads to Grade Inflation
However, when Cornell
researchers studied about 800,000 course grades issued at
Cornell from 1990 to 2004, they found that most students
visited the site to shop for classes where the median grade
was higher. Plus, professors who tended to give out higher
grades were more popular. Students with lower SAT scores
were the most likely to seek out courses with higher median
grades.
This "shopping"
in turn led to grade inflation, Vrinda Kadiyali, associate
professor of marketing and economics at Cornell's
Johnson Graduate School of Management,
one of the authors, explained in an
interview. The study, which is undergoing peer review, has
not yet been published.
So far, however, the
university has posted the median course grades only on the
Internet and has not yet put those grades on transcripts.
According to an article in the Cornell Daily Sun,
the school will start posting the grades on transcripts in
the spring. School officials were not immediately available
for comment.
The research team
hopes the school follows through on its plans. "That will
allow Cornell to hold itself to a higher standard because it
lets potential employers know where students stand relative
to other students," says Kadiyali.
The presence of the
median grade data is well-known to students but less
well-known to faculty. The researchers themselves were
prompted to do the study when one of them learned of the Web
site from a student questioning grades in her course.
Kadiyali says
the formula the researchers used to come to these
conclusions could easily be applied to Internet teacher
rating sites, such as
ratemyprofessors.com. It's
something educators should consider, she adds, to find out
how these posts affect the decision-making of students and,
thus, professors and their courses.
Jensen Comment
The problem is that, in modern times, grades are the keys to the kingdom (i.e.,
keys unlocking the gates of graduate studies and professional careers) such that
higher grades rather than education tend to become the main student goals. A
hundred years ago, just getting a degree could open postgraduate gates in life
because such a small proportion of the population got college diplomas. With
higher percentages of the population getting college diplomas, high grades
became keys to the kingdom. In many colleges a C grade is viewed as very nearly
a failing grade.
At the same time, formal teaching evaluations and teacher
rating sites like
ratemyprofessors.com have led to marked grade inflation in virtually all
colleges. The median grades are often A, A-, B+, or B. The poor student's C
grade is way below average. Just take a look at these course medians from
Cornell University ---
http://registrar.sas.cornell.edu/Grades/MedianGradeSP07.pdf
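The median-grade statistic behind this course shopping is simple to compute. A minimal sketch follows; the letter-to-points mapping is a common 4.3-style scale I am assuming for illustration, not Cornell's published method:

```python
from statistics import median

# Assumed letter-grade point scale (illustrative, not Cornell's).
POINTS = {"A+": 4.3, "A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0,
          "B-": 2.7, "C+": 2.3, "C": 2.0, "C-": 1.7, "D": 1.0, "F": 0.0}

def median_grade_per_course(records):
    """records: iterable of (course, letter_grade) tuples.

    Returns {course: median grade points} -- the kind of figure
    Cornell posted per course on its registrar's Web site.
    """
    by_course = {}
    for course, grade in records:
        by_course.setdefault(course, []).append(POINTS[grade])
    return {c: median(gs) for c, gs in by_course.items()}
```

Note that the median, unlike the mean, is insensitive to a few very low outlier grades, which is one reason it is the statistic usually posted.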
December 19, 2007 reply from a good friend who is also a
university-wide award-winning teacher
I'm not for easy grading, but I also
wonder some about this study. Could it be that the MORE EFFECTIVE
instructors are also easier graders and vice versa? I have no idea, but I'd
like to see a control for this variable.
And God help us if a professor is
popular! What an awful trait for an educator to have!
Jeez!
December 20, 2007 reply from Bob Jensen
Dear Jeez,
The terms "easy grader" and "easy grading" are
probably not suited for hypothesis testing. They are too hard to precisely
define. Some, probably most, "easy graders" counter by saying that they are
just better teachers and the students learned more because of superior
teaching. In many cases, but certainly not all cases, this is probably true.
Also, it is almost impossible to distinguish easy grading from easy content.
Students may learn everything in a course if the course is easy enough to do
so.
Instructors will also counter that they are ethical in
the sense of scaring off the poor students before the course dropping
deadlines. Instructors who snooker poor students to stay in their courses
and then hammer them down later on can show lower median grades without
punishing better students with C grades. Fortunately I don't think there are
many instructors who do this because they then face the risk of getting
hammered on teaching evaluations submitted by the worst students in the
course.
Easy grading/content is a lot like pornography. It's
probably impossible to precisely define but students know it when they shop
for easier courses before registering. It may be possible to a limited
extent to find easy graders in multiple section courses having common
examinations. For example, I was once a department chair where our two basic
accounting courses had over 30 sections each per semester. But even there it
is possible that all instructors were relatively "easy" when they put
together the common examinations.
It is widely known that nearly every college in the
U.S. suffers from grade inflation. Only an isolated few have been successful
in holding it down. College-wide grade averages have swung way above C
grades and in some instances even B grades. It is now typical for
median grades of a college to hit the B+ or A- range, and in many courses
the median grade is an A.
The Cornell study cited above covering 800,000 course
grades (a lot) did not identify easy graders. It identified courses/sections
having higher median grades. Higher median grades may not signify easy
grading or easy content, but students seem to know what they are shopping
for and the Cornell study found that students do shop around for bargains.
My guess is that the last courses left on the shelf are those with median
grades in the C range.
Bob Jensen
Bob Jensen's threads on grade inflation
http://faculty.trinity.edu/rjensen/assess.htm#GradeInflation
When Grading Less Is More
https://www.insidehighered.com/news/2019/04/02/professors-reflections-their-experiences-ungrading-spark-renewed-interest-student?utm_source=Inside+Higher+Ed&utm_campaign=aed948ff1a-DNU_2019_COPY_01&utm_medium=email&utm_term=0_1fcbc04421-aed948ff1a-197565045&mc_cid=aed948ff1a&mc_eid=1e78f7c952
When it comes to grading, less is more. So say a number of scholars who have
shared their recent experiments with “ungrading” in blog posts and on other
social media, sparking renewed discussions about the practice.
“My core hypothesis was
that student learning would actually be improved by eliminating instructor
grading from the course,” Marcus Schultz-Bergin, assistant lecturer
of philosophy at Cleveland State University, wrote
of going gradeless this
semester in a personal blog post that has since been shared on the popular
philosophy site Daily Nous.
“My
hope” for students, Schultz-Bergin continued, “is that the reflection they
engaged in, and the discussions we had, will lead to a significant
commitment in the second half of the course to really achieve what they set
out for themselves so that when they tell me they earned an A they can
really mean it.”
Thus far, he added, the experiment in his undergraduate philosophy of law
course "has had its ups and downs. There are definitely some things I will
change going forward, but I do think the gradeless approach can work well in
a course like this.”
Experts in ungrading say it’s still relatively rare in higher education, due
in part to inertia with respect to pedagogical innovation, the culture of
assessment and professors’ anxieties about going gradeless. How will
students respond? What will colleagues say? What will administrators think?
Continued in article
Jensen Comment
I would've loved my 40 years of teaching more if I never had to assign grades
(other than maybe Pass/Fail).
But I would've felt that in assigning only a P or an F I was less
professional. Grading is important at most any level of education. Personally, I
worked harder to learn from the fifth grade onward in courses where teachers
were harder graders. Part of it was probably my competitive nature. But mostly I
wanted to bring home report cards to make my parents proud of me when they
signed each report card.
I don't think I would've liked having to write a letter of performance for
each student who never took an exam. Sure I could write about innovative ideas
students had in essays, but it's very hard to compare innovative ideas for each
and every student since innovative ideas are often impractical with unachievable
goals.
My own experience as a college teacher is that competitive grades were
the main motivating factor for my better students and often even my struggling
students who dug in harder to improve their grades as each semester progressed.
How many students really take a pass/fail course so they won't have to work
as hard in that course?
Grades are a way that students can demonstrate ability when they tend to do
poorly on standardized tests. You may not be doing
minority students any favors when you take away course grades that show deeper
work ethics and abilities.
Some states guarantee admission to the top 10% of each high school
graduating class, such as the Texas 10% rule for automatic admission into
state-supported Texas universities ---
https://en.wikipedia.org/wiki/Texas_House_Bill_588
How do you select the top 10% of a high school's graduating class if there are
no course grades?
Many graduate schools (including medical schools and law schools) claim they
are looking more heavily into grades to counter poor standardized test scores
like the GRE, GMAT, MCAT, LSAT, etc. Without grades it would seem to me that
they become more reliant on standardized tests. Letters of recommendation from
former professors are hard to get in this age of lurking lawyers and of
class sizes so huge that professors really don't get to know all their
students very well. Letters of recommendation rarely say anything
negative, such that if there are 3,000 applicants to fill 400 slots in a medical
school, those letters of recommendation from Lake Wobegon are of little help in
the screening process ---
https://en.wikipedia.org/wiki/Lake_Wobegon
I'm not saying that students should not be allowed to take an occasional
Pass/Fail course, especially if it's outside their major field of study. What I
am saying is that pass/fail should not go mainstream.
Bob Jensen's threads on assessment are at
http://faculty.trinity.edu/rjensen/assess.htm
Partly because he was fed up with childish comments
on Web sites where students rate their professors, a business-school
professor at Temple University has created an online forum for students who
want to sound off. So as not to mislead students, the site’s title suggests
its intent: “Thank You Professor.”
“There are so many vehicles for students to express
their opinion,” says the site’s creator,
Samuel D. Hodge Jr., chairman of the business
school’s legal-studies department. “But there’s nothing really at the school
where the professor can get a letter directly from the student.”
When the site went live on May 1, Mr. Hodge says,
he expected about a dozen comments in the first week. Instead, more than 200
flooded in. He converts each note into a letter to the faculty member being
praised, then makes sure the business school’s dean gets a copy.
Mr. Hodge moderates the comments, but so far there
haven’t been any negative posts on
the site, he says.
For example, the four “thank you notes” left on the
site so far for
Rob B. Drennan Jr., an associate professor of
risk, insurance, and health-care management, have been uniformly laudatory
(three were signed, and one was anonymous). “I truly enjoyed his class,”
wrote one student, Tom Coia. “Difficult and challenging, but isn’t that what
we want from school?” Contrast that to an anonymous comment concerning Mr.
Drennan that a student left last spring on
RateMyProfessors.com: “BOOOOO!!!!!”
Mr. Hodge, incidentally,
has appeared on an MTV
Web site of faculty members who “strike back” against comments on
RateMyProfessors.com. He says Ohio State University is the only other
institution he knows of that gives students a way to thank their professors
on the Web.
Temple may extend the site to the whole university,
he says: “It’s such positive reinforcement."
Bob Jensen's threads on teaching evaluations are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#RateMyProfessor
Chocolate Coated Teaching Evaluations
A new study shows that giving students chocolate leads
to improved results for professors. “Fudging the Numbers: Distributing Chocolate
Influences Student Evaluations of an Undergraduate Course,” is set to be
published in an upcoming edition of the journal Teaching of Psychology. While
they were graduate students at the University of Illinois at Chicago, the
paper’s authors, Benjamin Jee and Robert Youmans, became interested in what kind
of environment instructors created right before handing out the evaluations.
Their theory: Outside factors could easily play a role in either boosting or
hurting a professor’s rating.
Elia Powers, "Sweetening the Deal," Inside Higher Ed, October 18, 2007
---
http://www.insidehighered.com/news/2007/10/18/sweets
Jensen Comment
One of my former colleagues left a candy dish full of chocolate morsels outside
her door 24/7. She also had very high teaching evaluations. At last I know the
secret of her success. I can vouch for the fact that her dish of chocolate, plus
her chocolate chip cookies the size of pancakes, also greatly improved relations
with at least one senior faculty member.
On a somewhat more serious side of things there is evidence, certainly not
in the case of my cookie-baking colleague, that grade inflation is also linked
to efforts to affect teaching evaluations in recent years. See below.
The
problem is that our students choose very bland, low nourishment diets in our
modern day smorgasbord curricula. Their concern is with their grade averages
rather than their education. And why not? Grades for students and turf for
faculty have become the keys to the kingdom!
Bob Jensen
"Are Undergraduates Actually Learning Anything?" by Richard Arum and
Josipa Roksa. Chronicle of Higher Education, January 18, 2011 ---
http://chronicle.com/article/Are-Undergraduates-Actually/125979/
Drawing on survey responses, transcript data, and
results from the Collegiate Learning Assessment (a standardized test taken
by students in their first semester and at the end of their second year),
Richard Arum and Josipa Roksa concluded that a significant percentage of
undergraduates are failing to develop the broad-based skills and knowledge
they should be expected to master. Here is an excerpt from Academically
Adrift: Limited Learning on College Campuses (University of Chicago Press),
their new book based on those findings.
Continued in article
Our Compassless Colleges: What are students really not learning?
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#Berkowitz
Question
What factors most heavily influence student performance and desire to take more
courses in a given discipline?
Answer
These outcomes are too complex to be predicted very well. Sex and age of
instructors have almost no impact. Teaching evaluations have a very slight
impact, but there are just too many complexities to find dominant factors
cutting across a majority of students.
Oreopoulos said the findings bolster a conclusion he
came to in a previous academic paper that subjective qualities, such as how a
professor fares on student evaluations, tell you more about how well students
will perform and how likely they are to stay in a given course than do
observable traits such as age or gender. (He points out, though, that even the
subjective qualities aren’t strong indicators of student success.) “If I were
concerned about improving teaching, I would focus on hiring teachers who perform
well on evaluations rather than focus on age or gender,” he said.
Elia Powers, "Faculty Gender and Student Performance," Inside Higher Ed,
June 21, 2007 ---
http://www.insidehighered.com/news/2007/06/21/gender
Jensen Comment
A problem with increased reliance on teaching evaluations to measure performance
of instructors is that this, in turn, tends to encourage grade inflation --- See below.
Professors of the Year
The Council for Advancement and Support of Education
and the Carnegie Foundation for the Advancement of Teaching announced today
winners of their annual
U.S. Professors of the Year award, given to
instructors who show dedication to undergraduate teaching and mentoring.
Elia Powers, Inside Higher Ed, November 15, 2007 ---
http://www.insidehighered.com/news/2007/11/15/topprofs
Jensen Comment
Although "professors of the year" chosen by peers often teach popular
courses, there are possibly more popular courses taught by instructors
who will never win awards given by peers.
It is somewhat revealing (a little about the professor and a lot about the
RateMyProfessor site) to read the student comments on RateMyProfessor. The
"hottest" professors at RateMyProfessor generally have many more evaluations
submitted than the four "Professors of the Year" listed below. You can find a
listing of the "hottest" professors (Top 50) at
http://www.ratemyprofessors.com/top50Profs.jsp?from=1&to=25&tab=hottest_top50
- The Rank 1 U.S. Professor of the Year as ranked by peers and judges is
Glen Ellis at Smith College. He only has seven student evaluations at
RateMyProfessor and you can read the outcomes at
http://www.ratemyprofessors.com/ShowRatings.jsp?tid=191487
- The Rank 2 U.S. Professor of the Year as ranked by peers and judges is
Rosemary Karr at Collin County Community College in Texas. She only has 25
student evaluations at RateMyProfessor and you can read the outcomes at
http://www.ratemyprofessors.com/ShowRatings.jsp?tid=207154
I had to chuckle at the student who said:
"I got a 68 in her class
and went to her office for tutorials 3 times a week, still didnt pass me.
she pickes favorites."
- The Rank 3 U.S. Professor of the Year as ranked by peers and judges is
Chris Sorensen at Kansas State University. There are 760 instructors
evaluated from KSU on RateMyProfessor, but apparently not one of Sorensen's
students submitted an evaluation. There were 11 professors with evaluations
from Sorensen's Department of Physics, but Sorensen was not on the list.
- The Rank 4 U.S. Professor of the Year as ranked by peers and judges is
Carlos G. Spaht at Louisiana State University in Shreveport. He only has 16
student evaluations at RateMyProfessor and you can read the outcomes at
http://www.ratemyprofessors.com/ShowRatings.jsp?tid=329076
For Trivia Buffs and Serious Researchers
Thousands of College Instructors Ranked on Just About Everything
November 13, 2007 message from David Albrecht
[albrecht@PROFALBRECHT.COM]
There is a popular teacher in my department. When
this fellow teaches a section of a multi-section course, his section fills
immediately and there is a waiting list. My department does not like an
imbalance in class size, so they monitor enrollment in his section. No one
is permitted to add his section until all other sections have at least one
more student than his.
I'm concerned about student choice, about giving
them a fair chance to get into his section instead of the current random
timing of a spot opening up in his section.
Does anyone else have this situation at your
school? How do you manage student sign-ups for a popular teacher? Any
practical suggestions would be greatly appreciated.
David Albrecht
Bowling Green
November 14, 2007 reply from Bob Jensen
Hi David,
I think the first thing to study is what makes an instructor so popular.
There can be good reasons (tremendous preparation, inspirational, caring,
knowing each student) and bad reasons (easy grader, no need to attend
class), and questionable without ipso facto being good or bad (entertaining,
humorous).
The RateMyProfessor site now has some information on most college
instructors in a number of nations ---
http://www.ratemyprofessors.com/index.jsp
The overwhelming factor leading to popularity is grading, since grading
is the number one concern that college students reveal. Of course there
are many problems in this
database and many instructors and administrators refuse to even look at
these RateMyProfessor archives. First, student reporting is
self-selective. The majority of students in any class do not submit evaluations.
A fringe element (often outliers for and against) tends to provide most of
the information. Since colleges do know the class sizes, it is possible to
get an idea about "sample" size, although these are definitely not random
samples. It's a little like book and product reviews on Amazon.com.
There are both instructors who are not rated at all on RateMyProfessor
and others who are too thinly rated (e.g., less than ten evaluations) to
have their evaluations taken seriously. For example, one of my favorite
enthusiastic teachers is the award-winning Amy Dunbar who teaches tax at the
University of Connecticut. Currently there are 82 instructors in the
RateMyProfessor archives who are named Dunbar. But not a single student
evaluation has apparently been sent in by the fortunate students of Amy
Dunbar. Another one of my favorites is Dennis Beresford at the University of
Georgia. But he only has one (highly favorable) evaluation in the archives.
I suspect that there's an added reporting bias. Both Amy and Denny mostly
teach graduate students. I suspect that graduate students are less inclined
to fool with RateMyProfessor.
Having said this, there can be revealing information about teaching
style, grading, exam difficulties, and other things factoring into good and
bad teaching. Probably the most popular thing I've noted is that the
top-rated professors usually get responses about making the class "easy."
Now that can be taken two ways. It's a good thing to make difficult material
seem easier but still grade on the basis of mastering the difficult
material. It is quite another thing to leave out the hard parts so students
really do not master the difficult parts of the course.
If nothing else, RateMyProfessor says a whole lot about the students we
teach. The first thing to note is how these college-level students often
spell worse than high school dropouts. In English classes such bad
grammar may be intentional, but I've read enough term papers over the years
to know that dependence on word-processor spell checkers has made
students worse at spelling in messages the computer does not
check for them. They're definitely Fonex spellers.
Many students, certainly not all, tend to prefer easy graders. For
example, currently the instructor ranked Number 1 in the United States by
RateMyProfessor appears to be an easy grader, although comments by only a
few individual students should be taken with a grain of salt. Here's Page
One (five out of 92 evaluations) of 19 pages of summary evaluations at
http://www.ratemyprofessors.com/ShowRatings.jsp?tid=23294
Date | Course | Ratings | Comment
11/13/07 | HIST101 | 5 | 5 | 5 | 5 | easiest teacher EVER
11/12/07 | abcdACCT | 1 | 1 | 1 | 1 | good professor
11/11/07 | HistGacct | 3 | 2 | 4 | 1 | Good teacher. Was enjoyable to heat teach. Reccomend class. Made my softmore year.
11/10/07 | HISTACCT | 5 | 5 | 5 | 5 | Very genious.
11/8/07 | histSECT | 3 | 5 | 4 | 4 | amazing. by far the greatest teacher. I had him for Culture and the Holocust with Schiffman and Scott. He is a genius. love him.
Does it really improve ratings not to require student presentations?
Although making a course easy is popular, is it a good thing to do? Here are
the Page 3 (five out of 55 evaluations) ratings of the instructor ranked
Number 2 in the United States:
Date | Course | Ratings | Comment
12/21/05 | Spanish 102 | 3 | 5 | 5 | 5 | One of the best professors that I have ever had. Homework is taken up on a daily base but, grading is not harsh. No presentations.
11/2/05 | SPA 102 | 4 | 5 | 5 | 3 | Wow, a great teacher. Totally does not call people out and make them feel stupid in class, like a lot of spanish teachers. The homework is super easy quiz grades that can be returned with corrections for extra points. You have to take her for Spa 102!!!! You actually learn in this class but is fun too!
10/27/05 | Span 102 | 4 | 5 | 5 | 5 | I love Senora Hanahan. She is one of the best teachers I ever had. She is very clear and she is super nice. She will go out of her way just to make sure that you understand. I Love Her! I advise everyone to take her if you have a choice. She is great!!
9/14/05 | SPA 201 | 4 | 5 | 5 | 5 | I am absolutly not suprised that Senora Hanahan has smiley faces on every rating. She is awesme and fun.
8/25/05 | SPA 102 | 4 | 5 | 5 | 5 | I LOVE her! Absolutely wonderful! Goes far out of her way to help you and remembers your needs always. She will call you at home if you tell her you need help, and she will do everything possible to keep you on track. I have no IDEA how she does it! She really wants you to learn the language. She's pretty and fun and absolutely wonderful!
Students, however, are somewhat inconsistent about grading and exam
difficulties. For example, read the summary outcomes for the instructor
currently ranked as Number 8 in the United States ---
http://www.ratemyprofessors.com/ShowRatings.jsp?tid=182825
Note this is only one page out of ten pages of comments:
Date | Course | Ratings | Comment
10/31/07 | hpd110 | 5 | 3 | 2 | 4 | she is pushing religion on us too much... she should be more open minded. c-lots is always forcing her faith based lessons down our throats. she makes me wanna puke.
10/14/07 | PysEd100 | 1 | 1 | 1 | 1 | She is no good in my opinion.
5/22/07 | HPD110 | 5 | 5 | 5 | 5 | Dr. Lottes is amazing! it is almost impossible to get lower than an A in her class as long as you show up. her lectures are very interesting and sometimes it's almost like going to therapy. the tests and activities are easy and during the test there are group sections so it'll help your test grades. she is very outgoing and fun! so take her!
12/7/06 | HDP070 | 2 | 5 | 5 | 2 | Grades the class really hard, don't take if you are not already physically fit. Otherwise, she's an amazing teacher. You can tell she really cares about her students.
Read the rest of the comments at
http://www.ratemyprofessors.com/ShowRatings.jsp?tid=182825
It's possible to look up individual colleges and I looked up Bowling
Green State University which is your current home base David. There are
currently 1,322 instructors rated at Bowling Green. I then searched by the
Department of Accounting. There are currently ten instructors rated. The
highest rated professor (in terms of average evaluations) has the following
Page One evaluations:
Date | Course | Ratings | Comment
4/9/07 | mis200 | 4 | 5 | 5 | 1 | i admit, i don't like the class (mis200) since i think it has nothing to do with my major. but mr. rohrs isn't that hard, and makes the class alright.
4/5/07 | mis200 | 3 | 4 | 4 | 1 | Other prof's assign less work for this class, but his assignments aren't difficult. Really nice guy, helpful if you ask, pretty picky though.
4/4/07 | Acct102 | 2 | 5 | 5 | 2 | Easy to understand, midwestern guy. Doesn't talk over your head.
12/14/06 | mis200 | 4 | 5 | 5 | 2 | Kind of a lot of work but if you do good on it you will def do good...real cool guy
12/10/06 | BA150 | 4 | 5 | 5 | 4 | Mr. Rohrs made BA 150 actually somewhat enjoyable. He is very helpful and makes class as interesting as possible. He is also very fair with grading. Highly Recommend.
Your evaluations make me want to take your classes David. However, only
36 students have submitted evaluations. My guess is that over the same years
you've taught hundreds of students. But my guess is that we can extrapolate
that you make dull old accounting interesting and entertaining to students.
In answer to your question about dealing with student assignments to
multiple sections I have no answers. Many universities cycle the
pre-registration according to accumulated credits earned. Hence seniors
sign up first and first year students get the leftovers. Standby signups are
handled according to timing much like airlines dole out standby tickets.
It is probably a bad idea to let instructors themselves add students to
the course. Popular teachers may be deluged with students seeking favors,
and some instructors do not know how to say no even though they may be
hurting other students by admitting too many students. Fortunately, classes
are generally limited by the number of seats available. Distance education
courses do not have that excuse for limiting class size.
PS
For research and sometimes entertainment, it's interesting to read the
instructor feedback comments concerning their own evaluations of
RateMyProfessor ---
http://www.mtvu.com/professors_strike_back/
You can also enter the word "humor" into the top search box and
investigate the broad range of humor and humorous styles of instructors.
Bob Jensen
Also see the following:
Bob Jensen's threads on the dysfunctional aspects of teacher evaluations
on grade inflation ---
http://faculty.trinity.edu/rjensen/Assess.htm#GradeInflation
Question
What topic dominates instructor evaluations on RateMyProfessors.com (or RATE for
short)?
"RateMyProfessors — or His Shoes Are Dirty," by Terry Caesar, Inside
Higher Ed, July 28, 2006 ---
http://www.insidehighered.com/views/2006/07/28/caesar
But the trouble begins here. Like those guests,
students turn out to be candid about the same thing. Rather than sex, it’s
grades. Over and over again, RATE comments cut right to the chase: how easy
does the professor grade? If easy, all things are forgiven, including a dull
classroom presence. If hard, few things are forgiven, especially not a dull
classroom presence. Of course we knew students are obsessed with grades. Yet
until RATE could we have known how utterly, unremittingly, remorselessly?
And now the obsession is free to roam and cavort,
without the constraints of the class-by-class student evaluation forms, with
their desiderata about the course being “organized” or the instructor having
“knowledge of subject matter.” These things still count. RATE students
regularly register them. But nothing counts like grades. Compared to RATE,
the familiar old student evaluation forms suddenly look like searching
inquiries into the very nature of formal education, which consists of many
other things than the evaluative dispositions of the professor teaching it.
What other things? For example, whether or not the
course is required. Even the most rudimentary of student evaluation forms
calls for this information. Not RATE. Much of the reason a student is free
to go straight for the professorial jugular — and notwithstanding all the
praise, the site is a splatfest — is because course content can be merrily
cast aside. The raw, visceral encounter of student with professor, as
mediated through the grade, emerges as virtually the sole item of interest.
Of course one could reply: so what? The site
elicits nothing else. That’s why it’s called, “rate my professors,” and not
“rate my course.” In effect, RATE takes advantage of the slippage always
implicit in traditional student evaluations, which both are and are not
evaluations of the professor rather than the course. To be precise, they are
evaluations of the professor in terms of a particular course. This
particularity, on the other hand, is precisely what is missing at the RATE
site, where whether or not a professor is being judged by majors — a crucial
factor for departmental and college-wide tenure or promotion committees who
are processing an individual’s student evaluations — is not stipulated.
Granted, a student might bring up being a major. A
student might bring anything up. This is why RATE disappoints, though,
because there’s no framework, not even that of a specific course, to
restrain or guide student comments. “Sarcastic” could well be a different
thing in an upper-division than in a lower-division course. But in the
personalistic RATE idiom, it’s always a character flaw. Indeed, the purest
RATE comments are all about character. Just as the course is without
content, the professor is without performative ability. Whether he’s a “nice
guy” or she “plays favorites,” it’s as if the student has met the professor
a few times at a party, rather than as a member of his or her class for a
semester.
RATE comments are particularly striking if we
compare those made by the professor’s colleagues as a result of classroom
observations. Many departments have evolved extremely detailed checksheets.
I have before me one that divides the observation into four categories,
including Personal Characteristics (10 items), Interpersonal Relationships
(8), Subject Application/Knowledge (8), and Conducting Instruction (36). Why
so many in the last category? Because performance matters — which is just
what we tell students about examinations: each aims to test not so much an
individual’s knowledge as a particular performance of that knowledge.
Of course, some items on the checksheet are of
dubious value, e.g. “uses a variety of cognitive levels when asking
questions.” So it goes in the effort to itemize successful teaching, an
attempt lauded by proponents of student evaluations or lamented by critics.
The genius of RATE is to bypass the attempt entirely, most notoriously with
its “Hotness Total.” Successful teaching? You may be able to improve
“helpfulness” or “clarity.” But you can’t very well improve “hotness.”
Whether or not you are a successful teacher is not safely distant at RATE
from whether or not you are “hot.”
Perhaps it never was. In calling for a temperature
check, RATE may merely be directly addressing a question — call it the
charisma of an individual professor — that traditional student evaluations
avoid. If so, though, they avoid it with good reason: charisma can’t be
routinized. When it is, it becomes banal, which is one reason why the
critical comments are far livelier than the celebratory ones. RATE winds up
testifying to one truism about teaching: It’s a lot easier to say what good
teaching isn’t than to say what it is. Why? One reason is, because it’s a
lot easier for students who care only about teachers and not about teaching
to say so.
Finally, what about these RATE students? How many
semester hours have they completed? How many classes did they miss? It is
with good reason (we discover) that traditional student evaluation forms are
careful to ask something about each student. Not only is it important for
the administrative processing of each form. Such questions, even at a
minimal level, concede the significance in any evaluation of the evaluating
subject. Without some attention to this, the person under consideration is
reduced to the status of an object — which is, precisely, what the RATE
professor becomes, time after time. Students on RATE provide no information
at all about themselves, not even initials or geographical locations, as
given by many of the people who rate books and movies on amazon.com or who
give comments on columns and articles on this Web site.
In fact, students at RATE don’t even have to be
students! I know of one professor who was so angered at a comment made by
one of her students that she took out a fake account, wrote a more favorable
comment about herself, and then added more praise to the comments about two
of her colleagues. How many other professors do this? There’s no telling —
just as there’s no telling about local uses of the site by campus
committees. Of course this is ultimately the point about RATE: Even the
student who writes in the most personal comments (e.g. “hates deodorant") is
completely safe from local retribution — never mind accountability — because
the medium is so completely anonymous.
Thus, the blunt energies of RATE emerge as cutting
edge for higher education in the 21st century. In this respect, the degree
of accuracy concerning any one individual comment about any one professor is
beside the point. The point is instead the medium itself and the nature of
the judgements it makes possible. Those on display at RATE are immediate
because the virtual medium makes them possible, and anonymous because the
same medium requires no identity markers for an individual. Moreover, the
sheer aggregation of the site itself — including anybody from anywhere in
the country — emerges as much more decisive than what can or cannot be said
on it. I suppose this is equivalent to shrugging, whatever we think of RATE,
we now have to live with it.
I think again of the very first student evaluation
I received as a T.A. The result? I no longer remember. Probably not quite as
bad as I feared, although certainly not as good as I hoped. The only thing I
remember is one comment. It was made, I was pretty sure, by a student who
sat right in the front row, often put her head down on the desk (the class
was at 8 a.m.) and never said a word all semester. She wrote: “his shoes are
dirty.” This shocked me. What about all the time I had spent, reading,
preparing, correcting? What about how I tried to make available the best
interpretations of the stories required? My attempts to keep discussions
organized, or just to have discussions, rather than lectures?
All irrelevant, at least for one student? It seemed
so. Worse, I had to admit the student was probably right — that old pair of
brown wingtips I loved was visibly becoming frayed and I hadn’t kept them
shined. Of course I could object: Should the state of a professor’s shoes
really constitute a legitimate student concern? Come to this, can’t you be a
successful teacher if your shoes are dirty? In today’s idiom, might this not
even strike at least some students all by itself as being, well, “hot"? In
any case, I’ve never forgotten this comment. Sometimes it represents to me
the only thing I’ve ever learned from reading my student evaluations. I took
it very personally once and I cherish it personally still.
Had it appeared on RATE, however, the comment would
feel very different. A RATE[D] professor is likely to feel like a contestant
on “American Idol,” standing there smiling while the results from the
viewing audience are totaled. What do any of them learn? Nothing, except
that everything from the peculiarities of their personalities to, ah, the
shine of their shoes, counts. But of course as professors we knew this
already. Didn’t we? Of course it might always be good to learn it all over
again. But not at a site where nobody’s particular class has any weight; not
in a medium in which everybody’s words float free; and not from students
whose comments guarantee nothing except their own anonymity. I’ll bet some
of them even wear dirty shoes.
July 28, 2006 reply from Alexander Robin A
[alexande.robi@UWLAX.EDU]
Two quotes from a couple of Bob Jensen's recent
posts:
"Of course we knew students are obsessed with
grades." (from the RateMyProfessors thread)
"The problem is that universities have explicit
or implicit rankings of "journal quality" that are largely dictated by
research faculty in those universities. These rankings are crucial to
promotion, tenure, and performance evaluation decisions." (from the TAR
thread)
These two issues are related. First, students are
obsessed with grades because universities, employers and just about everyone
else involved are obsessed with grades. One can also say that faculty are
obsessed with publications because so are those who decide their fates. In
these two areas of academia, the measurement has become more important than
the thing it was supposed to measure.
For the student, ideally the learning is the most
important outcome of a class and the grade is supposed to reflect how
successful the learning was. But the learning does not directly and tangibly
affect the student - the grade does. In my teaching experience, students,
administrators and employers saw the grade as being the key outcome of a
class, not the learning.
Research publication is supposed to result from a
desire to communicate the results of research activity that the researcher
is very interested in. But, especially in business schools, this has been
turned on its head and the publication is most important and the research is
secondary - it's just a means to the publication, which is necessary for
tenure, etc.
It's really a pathetic situation in which the
ideals of learning and discovery are largely perverted. Had I fully
understood the magnitude of the problem, I would have never gone for a PhD
or gotten into teaching. As to what to do about it, I really don't know. The
problems are so deeply entrenched in academic culture. Finally I just gave
up and retired early hoping to do something useful for the rest of my
productive life.
Robin Alexander
Bob Jensen's threads on teaching evaluations are at
http://faculty.trinity.edu/rjensen/assess.htm#TeachingStyle
Bob Jensen's threads on teaching evaluations and learning styles are at
http://faculty.trinity.edu/rjensen/assess.htm#LearningStyles
Dumbing Education Down
President George W. Bush's signature education
reform -- the No Child Left Behind Act -- is coming in for a close inspection in
Congress. And, it seems, members on both sides of the aisle have plenty of ideas
of how to tinker with NCLB. But almost nobody is talking about the law's central
flaw: Its mandate that every American schoolchild must become "proficient" in
reading and math while not defining what "proficiency" is. The result of this
flaw is that we now have a patchwork of discrepant standards and expectations
that will, in fact, leave millions of kids behind, foster new (state-to-state)
inequities in education quality, and fail to give the United States the schools
it needs to compete globally in the 21st century . . . Meanwhile, the federal
mandate to produce 100% proficiency fosters low standards, game-playing by
states and districts, and cynicism and rear-end-covering by educators. Tinkering
with NCLB, as today's bills and plans would do, may ease some of the current
law's other problems. But until lawmakers muster the intestinal fortitude to go
after its central illusions, America's needed education makeover is not going to
occur.
Chester E. Finn Jr., "Dumbing Education Down," The Wall Street Journal,
October 5, 2007; Page A16 ---
Click Here
Mr. Finn is a senior fellow at Stanford's Hoover Institution and president of
the Thomas B. Fordham Institute.
NCLB = No Child Left Behind Act
A September 2007 Thomas B. Fordham Institute report
found NCLB's assessment system "slipshod" and characterized by "standards that
are discrepant state to state, subject to subject, and grade to grade." For
example, third graders scoring at the sixth percentile on Colorado's state
reading test are rated proficient. In South Carolina the third grade proficiency
cut-off is the sixtieth percentile.
Peter Berger, "Some Will Be Left
Behind," The Irascible Professor, November 10, 2007 ---
http://irascibleprofessor.com/comments-11-10-07.htm
"Beyond Merit Pay and Student Evaluations," by James D. Miller,
Inside Higher Ed, September 8, 2007 ---
http://www.insidehighered.com/views/2007/09/07/miller
What tools
should colleges use to reward excellent teachers? Some rely
on teaching evaluations that students spend only a few
minutes filling out. Others trust deans and department
chairs to put aside friendships and enmities and objectively
identify the best teachers. Still more colleges don’t reward
teaching excellence and hope that the lack of incentives
doesn’t diminish teaching quality.
I propose instead that
institutions should empower graduating seniors to reward teaching
excellence. Colleges should do this by giving each graduating senior $1,000
to distribute among their faculty. Colleges should have graduates use a
computer program to distribute their allocations anonymously.
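Miller does not describe how such a program would work. As a purely illustrative sketch (the faculty names, the dollar validation rule, and the function name are my assumptions, not anything from the article), an anonymous tally of the graduating seniors' allocations might look like this:

```python
from collections import defaultdict

BUDGET = 1000  # dollars each graduating senior may distribute

def tally_allocations(allocations):
    """Aggregate per-student allocations into anonymous faculty totals.

    `allocations` is a list of dicts mapping faculty name -> dollars.
    Each student's allocation must sum to exactly BUDGET; only the
    aggregate totals are kept, so individual choices stay anonymous.
    """
    totals = defaultdict(int)
    for allocation in allocations:
        if sum(allocation.values()) != BUDGET:
            raise ValueError("each student must distribute exactly %d" % BUDGET)
        for faculty, amount in allocation.items():
            totals[faculty] += amount
    return dict(totals)

# Example: three hypothetical graduating seniors
students = [
    {"Prof. Adams": 700, "Prof. Baker": 300},
    {"Prof. Baker": 1000},
    {"Prof. Adams": 500, "Prof. Clark": 500},
]
print(tally_allocations(students))
# {'Prof. Adams': 1200, 'Prof. Baker': 1300, 'Prof. Clark': 500}
```

Keeping only the aggregate totals is what preserves the anonymity Miller calls for: nothing ties a faculty member's total back to any one student.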
My proposal would have
multiple benefits. It would reduce the tension between tenure and merit pay.
Tenure is supposed to insulate professors from retaliation for expressing
unpopular views in their scholarship. Many colleges, however, believe that
tenured professors don’t have sufficient incentives to work hard, so
colleges implement a merit pay system to reward excellence. Alas, merit pay
can be a tool that deans and department heads use to punish politically
unpopular professors. My proposal, however, provides for a type of merit pay
without giving deans and department heads any additional power over
instructors. And because the proposal imposes almost no additional
administrative costs on anyone, many deans and department heads might prefer
it to a traditional merit pay system.
Students, I suspect, would
take their distribution decisions far more seriously than they do
end-of-semester class evaluations. This is because students are never sure
how much influence class evaluations have on teachers’ careers, whereas the
link between their distributions and their favorite teachers’ welfare would
be clear. Basing merit pay on these distributions, therefore, will be
“fairer” than doing so based on class evaluations. Furthermore, these
distributions would provide very useful information to colleges in making
tenure decisions or determining whether to keep employing a non-tenure track
instructor.
The proposal would also
reward successful advising. A good adviser can make a student’s academic
career. But since advising quality is difficult to measure, colleges rarely
factor it into merit pay decisions. But I suspect that many students
consider their adviser to be their favorite professor, so great advisers
would be well rewarded if graduates distributed $1,000 among faculty.
Hopefully, these $1,000
distributions would get students into the habit of donating to their alma
maters. The distributions would show graduates the link between donating and
helping parts of the college that they really liked. Colleges could even ask
their graduates to “pay back” the $1,000 that they were allowed to give
their favorite teachers. To test whether the distributions really did
increase alumni giving, a college could randomly choose, say, 10 percent of
a graduating class for participation in my plan and then see if those
selected graduates did contribute more to the college.
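The randomized test Miller proposes could be sketched as follows. Everything here is illustrative: the function names, the fixed seed, and the treatment fraction are my assumptions layered on his "randomly choose, say, 10 percent" idea.

```python
import random
import statistics

def run_giving_experiment(graduates, treat_fraction=0.10, seed=42):
    """Randomly assign a fraction of a graduating class to the
    $1,000-distribution plan (treatment); the rest are controls."""
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    shuffled = graduates[:]
    rng.shuffle(shuffled)
    n_treat = int(len(shuffled) * treat_fraction)
    return shuffled[:n_treat], shuffled[n_treat:]

def compare_giving(gifts, treated, control):
    """Years later, compare mean alumni donations between groups.

    `gifts` maps graduate -> later donation in dollars; returns the
    treated-minus-control difference in mean giving.
    """
    mean = lambda group: statistics.mean(gifts[g] for g in group)
    return mean(treated) - mean(control)
```

A positive difference would be (weak, single-college) evidence that the distributions got graduates into the donating habit; a real study would also want a significance test.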
My reward system would help
a college attract star teachers. Professors who know they often earn their
students' adoration will eagerly join a college that lets students enrich
their favorite teachers.
Unfortunately, today many
star teachers are actually made worse off because of their popularity.
Students often spend much time talking to star teachers, make great use of
their office hours and frequently ask them to write letters of
recommendation. Consequently, star teachers have less time than average
faculty members do to conduct research. My proposal, though, would help
correct the time penalty that popularity so often imposes on the best
teachers.
College trustees and regents
who have business backgrounds should like my idea because it rewards
customer-oriented professors. And anything that could persuade trustees to
increase instructors’ compensation should be very popular among faculty.
But my proposal would be the
most popular among students. It would signal to students that the college is
ready to trust them with some responsibility for their alma mater’s
finances. It would also prove to students that the way they have been
treated at college is extremely important to their school.
James D. Miller is an associate professor of economics at Smith
College.
Jensen Comment
One-time "gifts" to teachers are not the same as salary increases that are
locked in year after year after year until the faculty member resigns or
retires. It is also extremely likely that this type of reward system might be
conducive to grade inflation popularity contests. Also some students might ask
why they are being charged $1,000 more in tuition to be doled out as bonuses
selectively to faculty.
But by far the biggest flaw in this type of reward system is the bias toward
large class sections. Some of the most brilliant research professors teach
advanced-level courses to far fewer students than instructors teaching
introductory courses for first- and second-year students. Is it a good idea for
a top specialist to abandon his advanced specialty courses for majors in order
to reap greater financial rewards from teaching elementary courses with larger
enrollments?
Bob Jensen's threads on higher education controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
Question
Guess which parents most strongly object to grade inflation?
Hint: Parents Say Schools Game System, Let Kids Graduate Without Skills
The Bredemeyers represent a new voice in special
education: parents disappointed not because their children are failing, but
because they're passing without learning. These families complain that schools
give their children an easy academic ride through regular-education classes,
undermining a new era of higher expectations for the 14% of U.S. students who
are in special education. Years ago, schools assumed that students with
disabilities would lag behind their non-disabled peers. They often were taught
in separate buildings and left out of standardized testing. But a combination of
two federal laws, adopted a quarter-century apart, has made it national policy
to hold almost all children with disabilities to the same academic standards as
other students.
John Hechinger and Daniel Golden, "Extra Help: When Special Education Goes
Too Easy on Students," The Wall Street Journal, August 21, 2007, Page A1
---
http://online.wsj.com/article/SB118763976794303235.html?mod=todays_us_page_one
Bob Jensen's fraud updates are at
http://faculty.trinity.edu/rjensen/FraudUpdates.htm
A Compelling Case for Reforming the Current Teaching Evaluation Process
"Bias, the Brain, and Student Evaluations of Teaching," by Deborah
Jones Merritt, Ohio State University College of Law, SSRN, January 2007
---
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=963196
Student evaluations of teaching are a common
fixture at American law schools, but they harbor surprising biases.
Extensive psychology research demonstrates that these assessments respond
overwhelmingly to a professor's appearance and nonverbal behavior; ratings
based on just thirty seconds of silent videotape correlate strongly with
end-of-semester evaluations. The nonverbal behaviors that influence teaching
evaluations are rooted in physiology, culture, and habit, allowing
characteristics like race and gender to affect evaluations. The current
process of gathering evaluations, moreover, allows social stereotypes to
filter students' perceptions, increasing risks of bias. These distortions
are inevitable products of the intuitive, “system one” cognitive processes
that the present process taps. The cure for these biases requires schools to
design new student evaluation systems, such as ones based on facilitated
group discussion, that enable more reflective, deliberative judgments. This
article, which will appear in the Winter 2007 issue of the St. John's Law
Review, draws upon research in cognitive decision making, both to present
the compelling case for reforming the current system of evaluating classroom
performance and to illuminate the cognitive processes that underlie many
facets of the legal system.
Measuring Teacher Effectiveness
RAND Corporation: Measuring Teacher Effectiveness ---
http://www.rand.org/education/projects/measuring-teacher-effectiveness.html
Explore the Measuring Teacher Effectiveness Fact Sheet Series Teachers
Matter: Understanding Teachers' Impact on Student Achievement
Research suggests that, among school-related factors, teachers matter
most when it comes to a student's academic performance. Nonschool
factors do influence student achievement, but effective teaching has the
potential to help level the playing field.
Multiple Choices: Options for Measuring Teaching Effectiveness
Teaching is a complex activity that should be measured with multiple
methods. Some examine teachers' practices directly, while others
emphasize student outcomes. Each method has trade-offs, and no single
method provides a complete picture of a teacher's effectiveness.
Tests and the Teacher: What Student Achievement Tests Do—and Don't—Tell
Us About Teacher Effectiveness
In addition to helping students learn reading and math, we also trust
teachers to teach students to think, reason, and work cooperatively with
one another. Students' scores on achievement tests tell us something—but
by no means everything—about how well teachers are meeting these
expectations.
Value-Added Modeling 101: Using Student Test Scores to Help Measure
Teaching Effectiveness
Value-added models, or VAMs, attempt to measure a teacher's impact on
student achievement apart from other factors, such as individual
ability, family environment, past schooling, and the influence of peers.
Value-added estimates enable relative judgments but are not absolute
indicators of effectiveness.
Student Growth Percentiles 101: Using Relative Ranks in Student Test
Scores to Help Measure Teaching Effectiveness
Student growth percentiles, or SGPs, provide a simple way of
comparing the improvement of one teacher's students at the end of the
year with the improvement of other students who started the year at the
same level.
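The RAND fact sheet gives no formula, but the basic idea behind an SGP can be sketched as a toy computation: rank each student's end-of-year score against peers who started the year at the same level. Real implementations use quantile regression over several prior years; the data and function name here are my own illustrative assumptions.

```python
from bisect import bisect_left
from collections import defaultdict

def growth_percentiles(records):
    """Toy student growth percentile (SGP): the percentage of peers with
    the SAME prior-year score whose current-year score falls below the
    student's. `records` is a list of (student, prior, current) tuples.
    """
    # Group current-year scores by prior-year score, sorted for ranking
    peers = defaultdict(list)
    for _, prior, current in records:
        peers[prior].append(current)
    for scores in peers.values():
        scores.sort()

    sgps = {}
    for student, prior, current in records:
        group = peers[prior]
        below = bisect_left(group, current)  # peers scoring strictly below
        sgps[student] = round(100 * below / len(group))
    return sgps

records = [
    ("Ana", 300, 340), ("Ben", 300, 310), ("Cam", 300, 360),
    ("Dee", 250, 320), ("Eli", 250, 300),
]
print(growth_percentiles(records))
# {'Ana': 33, 'Ben': 0, 'Cam': 67, 'Dee': 50, 'Eli': 0}
```

Note how Dee (250 to 320) outranks Ana (300 to 340) even with a lower final score: SGPs measure improvement relative to similar starters, which is exactly the "relative rank" framing in the fact sheet's title.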
Altmetrics
"Scholars Seek Better Ways to Track Impact Online," by Jennifer Howard,
Chronicle of Higher Education, January 29, 2012 ---
http://chronicle.com/article/As-Scholarship-Goes-Digital/130482/?sid=wc&utm_source=wc&utm_medium=en
In academe, the game of how to win friends and
influence people is serious business. Administrators and grant makers want
proof that a researcher's work has life beyond the library or the lab.
But the current system of measuring scholarly
influence doesn't reflect the way many researchers work in an environment
driven more and more by the social Web. Research that used to take months or
years to reach readers can now find them almost instantly via blogs and
Twitter.
That kind of activity escapes traditional metrics
like the impact factor, which indicates how often a journal is cited, not
how its articles are really being consumed by readers.
An approach called altmetrics—short for alternative
metrics—aims to measure Web-driven scholarly interactions, such as how often
research is tweeted, blogged about, or bookmarked. "There's a gold mine of
data that hasn't been harnessed yet about impact outside the traditional
citation-based impact," says Dario Taraborelli, a senior research analyst
with the Strategy Team at the Wikimedia Foundation and a proponent of the
idea.
Interest in altmetrics is on the rise, but it's not
quite right to call it a movement. The approach could better be described as
a sprawling constellation of projects and like-minded people working at
research institutions, libraries, and publishers.
They've been talking on Twitter (marking their
messages with the #altmetrics hashtag), sharing resources and tools online,
and developing ideas at occasional workshops and symposia. They're united by
the idea that "metrics based on a diverse set of social sources could yield
broader, richer, and timelier assessments of current and potential scholarly
impact," as a call for contributions to a forthcoming altmetrics essay
collection puts it.
Jason Priem, a third-year graduate student at the
School of Information and Library Science at the University of North
Carolina at Chapel Hill, is a leader in this push to track impact via the
social Web. Scholarly workflows are moving online, leaving traces that can
be documented—not just in articles but on social networks and reference
sites such as Mendeley and Zotero, where researchers store and annotate
scholarship of interest. "It's like we have a fresh snowfall across this
docu-plain, and we have fresh footprints everywhere," he says. "That has the
potential to really revolutionize how we measure impact."
Mr. Priem helped write a manifesto, posted on the
Web site altmetrics.org, which articulates the problems with traditional
evaluation schemes. "As the volume of academic literature explodes, scholars
rely on filters to select the most relevant and significant sources from the
rest," the manifesto argues. "Unfortunately, scholarship's three main
filters for importance are failing."
Peer review "has served scholarship well" but has
become slow and unwieldy and rewards conventional thinking.
Citation-counting measures such as the h-index take too long to accumulate.
And the impact factor of journals gets misapplied as a way to assess an
individual researcher's performance, which it wasn't designed to do.
"I'm not down on citations," Mr. Priem says. "I'm
just saying it's only part of the story. It's become the only part of the
story we care about."
That's where altmetrics comes in. It's a way to
measure the "downstream use" of research, says Cameron Neylon, a senior
scientist at Britain's Science and Technology Facilities Council, and
another contributor to the manifesto. Any system that turns out to be a
useful way to measure influence will tempt the unscrupulous to try and game
it, though. One concern is that someone could build a program, for instance,
that would keep tweeting links to an article and inflate its altmetrics
numbers. Devising a Method
So how do you reliably measure fluid, fast-paced,
Web-based, nonhierarchical reactions to scholarly work? That problem has
been keeping Mr. Priem busy. He's part of the team that designed an
altmetrics project called Total-Impact.
Researchers can go to the site and enter many forms
of research, including blog posts, articles, data sets, and software they've
written. Then the Total-Impact application will search the Internet for
downloads, Twitter links, mentions in open-source software libraries, and
other indicators that the work is being noticed. "We go out on the Web and
find every sort of impact and present them to the user," Mr. Priem explains.
When possible, they gather data directly from services' open-application
programming interfaces, or API's.
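The article does not describe Total-Impact's internals. A minimal sketch of the aggregation step it implies, with entirely hypothetical source names and stubbed fetchers standing in for real API calls, might look like this:

```python
def total_impact(artifact_id, sources):
    """Aggregate mention counts for one research artifact across several
    Web services. `sources` maps a source name to a callable returning
    the count for `artifact_id` (in practice, a call to that service's
    API); all names here are illustrative, not Total-Impact's own.
    """
    report = {}
    for name, fetch in sources.items():
        try:
            report[name] = fetch(artifact_id)
        except Exception:
            report[name] = None  # service unavailable; don't fail the report
    return report

# Stubbed fetchers in place of real API calls
sources = {
    "tweets": lambda doi: 12,
    "bookmarks": lambda doi: 5,
    "blog_posts": lambda doi: 2,
}
print(total_impact("10.1000/example.doi", sources))
# {'tweets': 12, 'bookmarks': 5, 'blog_posts': 2}
```

Swallowing per-source failures reflects a design constraint the article hints at: with many flaky Web services, one dead API should degrade the report, not destroy it.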
These are very early days for Total-Impact, and
there's a lot of information it doesn't gather yet. For instance, right now
it only searches blogs indexed by the site Research Blogging. That "amounts
to a very small subset of science blogs," according to Mr. Priem, who adds
that most of the other metrics are more robust.
Continued in article
Jensen Comment
When I look at my own teaching, research, and service record over 40+ years, I
sometimes wonder where and how I've made my major "impact" to date ---
http://faculty.trinity.edu/rjensen/Resume.htm
It's a no-brainer to conclude that virtually all instructors have impacts on
students both in teaching and in counseling/advising. However, since over half
my career was spent on the faculty of small universities, the numbers of
students that I've impacted is relatively small.
I've had successes in research and publication, but I don't think these are
what people today think of if they know anything about the retired accounting
professor named Bob Jensen. I was honored to be invited to a think tank for two
years where I mingled with better thinkers than me, including two with Nobel
Prizes in science.
Service is a very broad concept that includes both service to my employer and
service to my profession in the form of committee appointments, serving as an
officer, and making presentations at over 350 campuses and conferences. These
probably did the most in the early days of education technology to further my
career and reputation.
However, as I look back upon everything I've accomplished, I think my Website
has had the greatest "impact" in the context of the above article ---
http://faculty.trinity.edu/rjensen/
Website success, however, has been interactive with messaging on a listserv
known as the AECM. I've undoubtedly sent more messages to the AECM than any
other subscriber, but I've also probably benefitted more than any other
subscriber in terms of posting messages for AECM scholars on my Website. Thus I
alone will never take the credit for "my" Website. What I might call "my"
Website is really the product of tens of thousands of messages from others that
I often did little more than archive at "my" Website. I've made comments
favorable and critical of most of those messages, but the genuine value of the
modules in question came from the outside.
My point here is that it's very difficult to assess "my" impact when much of
what I've done is make it easier for the world to find my archives of the works
of others. It's virtually impossible to partition what impacts are "mine" and
what impacts I've merely "archived."
But that's the way scholarship and research work in academe. There are very
few discipline-shaking seminal contributions, although there are many smaller
contributions that build on previous (hopefully always cited) knowledge.
What the digital age has provided us are the tools for more efficiently
finding the knowledge to date on each given topic and to communicate the small
tidbits we try to add to that knowledge.
The "impact" of each tidbit is tiny, tiny, tiny. But like grains of sand the
tidbits pile on top of other tidbits until there's a heap of knowledge upon
which to judge a scholar. However, because many of these tidbits are
combinations of my own contributions mixed with the contributions of others, it
becomes very difficult to take credit for the aggregate "impact" that is
"mine." But then who really cares whether the impact is "mine" or not? The
important thing is the impact of each tidbit in what becomes a mountain of
scholarship heaped higher and higher, one grain at a time.
As written above:
These are very early days for Total-Impact, and
there's a lot of information it doesn't gather yet. For instance, right now
it only searches blogs indexed by the site Research Blogging. That "amounts
to a very small subset of science blogs," according to Mr. Priem, who adds
that most of the other metrics are more robust.
It will never be possible in knowledge sharing to measure "my" Total-Impact.
Superficial measures like numbers of hits on "my" Website or number of citations
of my published research are meaningless since we have no way of assessing the
ultimate value of one hit versus the value of any one of a million other hits.
And the real value of my work may still lie in the future when scholars not yet
born discover my works.
At this stage of my life in retirement it does not really matter to me what
my score is on Total-Impact. What matters most is that I played a part, however
large or small, in the accumulation of knowledge in my chosen specialties. Put
another way, I don't much care about my "altmetrics." A note of appreciation
from a friend or a total stranger means much more to me. And I appreciate it
when others are critical of selected tidbits I've archived. The fact that it was
worthwhile for them to take the time to criticize my work is a backhanded
compliment. I truly do love to debate controversial issues.
Edutopia: Assessment (a broader look at education assessment) ---
http://www.edutopia.org/assessment
Look beyond high-stakes testing to learn about
different ways of assessing the full range of student ability -- social,
emotional, and academic achievement.
Coaches Graham and Gazowski
Question
Why are there so few, if any left like Coach Gazowski?
"Accounting Degrees Up 19 Percent: AICPA Report," SmartPros,
May 6, 2008 ---
http://accounting.smartpros.com/x61772.xml
The American Institute of CPAs announced that more than 64,000 students
graduated with bachelor's and master's degrees in accounting in the 2006-07
school year, a 19 percent increase since the 2003-04 school year, when the
AICPA last surveyed this data.
At the same
time, over 203,000 students enrolled in accounting programs at both the
undergraduate and graduate levels. This also represents a 19 percent
increase since 2004, according to the AICPA study,
2008 Trends in the Supply of Accounting Graduates and the Demand for Public
Accounting Recruits. The gender ratio of
graduates is fairly close at 52 percent female and 48 percent male.
"The years in
the aftermath of
Sarbanes-Oxley have spotlighted the critical role
the accounting profession plays in our capital market system," said Denny
Reigle, AICPA director – academic and career development. "One fortunate
result of SOX was greater interest in accounting on the part of students, as
this report attests."
The demands of
Sarbanes-Oxley legislation likewise have led to substantial hiring increases
by public accounting firms, the primary employers of new graduates. The
AICPA report reveals that hiring by firms in 2006-07 shot up 83 percent over
the previous three years. Sixty-seven percent of the firms that responded to
the survey anticipate continued growth in hiring.
This is the largest
number of graduates in the 36 years the AICPA has been tracking this data.
Jensen Comment
What I find most interesting is that, while celebrating the
post-SOX surge in the number of accounting graduates, we're reminded that we
still produced more accountants when the
Dow index was under 2,000, the AACSB was strict on standards, the largest
CPA firms were mostly national instead of international, and the office space
required for the largest CPA firms in any city was less than 10% of what it is
today. A much higher proportion of our graduates in those days ended up working
for smaller CPA firms or business firms. Four decades ago client-firm executives
were less inclined to seek out creative accounting to pad their stock options
since their pay was reasonable and not so contractually tied to earnings
numbers.
Historical cost à la Paton and Littleton ruled the accounting world with
underlying concepts such as the matching principle. Audit trails did not
disappear inside computers or the Cayman Islands. Substantive tests reigned
supreme in auditing.
Judging from the adverse
PCAOB oversight reports of audits in the past couple years, I think the
auditing firms were more professional four decades ago and were less inclined to
cut corners due to budget overruns and staff shortages. This is only my
subjective opinion based upon my very limited career as a real-world auditor
with flying fingers on a 10-key adding machine. We actually trudged down to
Pueblo, Colorado to count pistons on Sundays and waded through the muck in
Monfort's feed lots in Greeley in order to estimate the amount of piled up
manure inventory.
Students today have never seen one of those
typewriter-sized calculators with the moving bar that ratcheted back and
forth sort of on its own after being given a calculation to perform.
Four decades ago the CPA exam was narrow and deep compared with shallow
and wide today when we have so many more complicated standards that are barely
touched on the CPA exam. I think the first-time passage rate has remained pretty
stable (15%-25%) over the years such that somebody must be controlling the
faucet.
We had one woman in the Denver office of Ernst & Ernst, who did tax returns
in the back office amidst a cloud of cigarette smoke. Emma was rarely, if ever,
allowed to see a client. Those were not the good old days in many respects. Even
though we produced more accounting graduates four decades ago, they were
mostly white males. Women graduates were supposed to be K-12 teachers and nurses
rather than doctors, lawyers, CEOs, CFOs, and accountants. Hispanics and blacks
rarely had opportunities to attend college. Many of our attitudes about fairness
and opportunity have changed for the good. But many of our attitudes about
compensation, life style, families, divorce, drugs, plagiarism/cheating, and
work have changed for the bad.
A C-grade was actually considered the median grade in college four decades
ago ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
Accounting graduates did not have to have all A or B+ grades to be interviewed
for jobs.
Our teachers were not denied tenure because they were scholars rather than
researchers. Even if they were tough as nails and piled the work over our heads
in courses, they could still get tenure, respect, and pay raises. Most of the
professors I look back on with admiration, in retrospect, would be un-tenurable
today because they devoted too much time to their craft and scared the bejeebers
out of us. I can just imagine the cursing words that would be written about them
if we had RATE-MY-PROFESSOR in those days ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#RateMyProfessor
But then again maybe the cursing words would not have flowed because I think we
had more respect for our teachers in those days.
"How Great Coaches Ask, Listen, and Empathize," by Ed Batista,
Harvard Business Review Blog, February 18, 2015 ---
Click Here
https://hbr.org/2015/02/how-great-coaches-ask-listen-and-empathize?utm_source=newsletter_daily_alert&utm_medium=email&utm_campaign=alert_date&cm_lm=rjensen%40trinity.edu&cm_mmc=email-_-newsletter-_-daily_alert-_-alert_date&referral=00563&cm_ven=Spop-Email&cm_ite=DailyAlert-021915+%281%29
. . .
Coaching is about connecting with people, inspiring
them to do their best, and helping them to grow. It’s also about challenging
people to come up with the answers they require on their own. Coaching is
far from an exact science, and all leaders have to develop their own style,
but we can break down the process into practices that any manager will need
to explore and understand. Here are the three most important:
Ask
Coaching begins by creating space to be filled by
the employee, and typically you start this process by asking an open-ended
question. After some initial small talk with my clients and students, I
usually signal the beginning of our coaching conversation by asking, “So,
where would you like to start?” The key is to establish receptivity to
whatever the other person needs to discuss, and to avoid presumptions that
unnecessarily limit the conversation. As a manager you may well want to set
some limits to the conversation (“I’m not prepared to talk about the budget
today.”) or at least ensure that the agenda reflects your needs (“I’d like
to discuss last week’s meeting, in addition to what’s on your list.”), but
it’s important to do only as much of this as necessary and to leave room for
your employee to raise concerns and issues that are important to them. It’s
all too easy for leaders to inadvertently send signals that prevent
employees from raising issues, so make it clear that their agenda matters.
In his book
Helping, former MIT professor Edgar Schein
identifies different modes of inquiry that we employ when we’re offering
help, and they map particularly well to coaching conversations. The initial
process of information gathering I described above is what Schein calls
“pure inquiry.” The next step is “diagnostic inquiry,” which consists of
focusing the other person’s attention on specific aspects of their story,
such as feelings and reactions, underlying causes or motives, or actions
taken or contemplated. (“You seem frustrated with Chris. How’s that
relationship going?” or “It sounds like there’s been some tension on your
team. What do you think is happening?” or “That’s an ambitious goal for that
project. How are you planning to get there?”)
The next step in the process is what Schein
somewhat confusingly calls “confrontational inquiry”. He doesn’t mean that
we literally confront the person, but, rather, that we challenge aspects of
their story by introducing new ideas and hypotheses, substituting our
understanding of the situation for the other person’s. (“You’ve been talking
about Chris’s shortcomings. How might you be contributing to the problem?”
or “I understand that your team’s been under a lot of stress. How has
turnover affected their ability to collaborate?” or “That’s an exciting
plan, but it has a lot of moving parts. What happens if you’re behind
schedule?”)
Continued in article
Jensen Comment
Sometimes coaches seem mean when they care the most. Sometimes teachers are more
demanding when they care the most. Do the best players and the best scholars and
the best workers come from Lake Wobegon where coaches and teachers coddle
everybody along?
I don't think so. Empathy is one thing. Giving every competitor the same blue
ribbon is probably dysfunctional after the age of 10.
http://en.wikipedia.org/wiki/Lake_Wobegon
Grade inflation in both K-12 schools and colleges is a disaster these days when
an A- grade is the median grade more often than not for the wrong reasons (so
teachers can get higher evaluations from students) ---
http://faculty.trinity.edu/rjensen/Assess.htm#RateMyProfessor
Randy Pausch said it very well when he wrote about his tough old football
coach, Coach Graham, in Chapter Seven of The Last Lecture (Hyperion
Books, 2008, ISBN 978-1-4013-2325-7).
. . . one of the assistant
coaches came over to reassure me. "Coach Graham rode you pretty hard,
didn't he?" he said.
I could barely muster a "yeah."
"That's a good thing," the assistant told me. "When
you're screwing up and nobody says anything to you anymore, that means
they've given up on you."
. . .
There's a lot of talk these days about giving
children self-esteem. It's not something you can give; it's
something they have to build. Coach Graham worked in a no-coddling zone.
Self-esteem? He knew there was really only one way to teach kids how to
develop it: You give them something they can't do, they work hard until
they find they can do it, and you just keep repeating the process.
When Coach Graham first got hold of me, I was this
wimpy kid with no skills, no physical strength, and no conditioning. But he
made me realize that if I work hard enough, there will be things I can do
tomorrow that I can't do today. Even now, having just turned forty-seven, I
can give you a three point stance that any NFL lineman would be proud of.
I realize that, these days, a guy like Coach Graham
might get thrown out of a youth sports league. He'd be too tough. Parents
would complain.
I remember one game when our team was playing
terribly. At halftime, in our rush for water, we almost knocked over the
water bucket. Coach Graham was livid: "Jeez! That's the most I've seen you
boys move since this game started!" We were eleven years old, just standing
there, afraid he'd pick us up one by one and break us with his bare hands.
"Water?" he barked. "You boys want water?" He lifted the bucket and dumped
all the water on the ground.
. . .
It saddens me that many kids today are so coddled.
I think back to how I felt during that halftime rant. Yes, I was thirsty.
But more than that, I felt humiliated. We had all let down Coach Graham, and
he let us know it in a way we'd never forget. He was right.
. . .
I haven't seen Coach Graham since I was a teen, but
he just keeps showing up in my head, forcing me to work harder whenever I
feel like quitting, forcing me to be better. He gave me a feedback loop for
life.
Bob Jensen's football coach would've viewed Coach Graham as a wimp. My Algona
High School coach's name was "The" Coach Tony Gazowski. Tony grew up Polish and
tough in the shadows of the steel mills in Pittsburgh. He became an
"All-Big-Ten" defensive end at the University of Iowa and never did catch on
that later in life he was a football coach and not a Marine drill instructor (he
was also a former Marine sergeant). Coach Gazowski did for me what Coach Graham
did for Randy, but Coach Gazowski sometimes went a bit too far in urging us to
play a bit rougher than the rules allowed if we thought we could get away with
it. This might be a good thing to do on a wartime battlefield, but it's not
something I recommend in athletics and most other aspects of life.
You can read more about Randy and find the link to the video of his "Last
Lecture" and commentaries that followed at
http://faculty.trinity.edu/rjensen/tidbits/2008/tidbits080415.htm
Grade Inflation and Teaching Evaluations
Statement Against Student Evaluations for Promotion and Tenure Decisions
(American Sociological Association) ---
https://www.asanet.org/sites/default/files/asa_statement_on_student_evaluations_of_teaching_sept52019.pdf
Jensen Comment
They fail to mention my main objection to student evaluations --- the disgrace of
grade inflation bringing the median grades up to A- across the USA ---
See Below
How to Mislead With Statistics
New York algebra fact of the
day ---
https://marginalrevolution.com/marginalrevolution/2020/06/new-york-fact-of-the-day-2.html
Take
here in New York, where in 2016 the passing rate for the Regents Examination
in Algebra I test was 72 percent. Unfortunately, this (relatively) higher
rate of success does not indicate some sort of revolutionary pedagogy on the
part of New York state educators. As the New York Post complained in 2017,
passing rates were so high in large measure because the cutoff for passing
was absurdly low — so low that students needed only to answer 31.4 percent
of the questions correctly to pass the 2017 exam.
Walter A. Williams: The Nation's Report Card
How are K-12 schools doing under President Trump versus
President Obama?
https://townhall.com/columnists/walterewilliams/2020/05/06/the-nations-report-card-n2568167?utm_source=thdaily&utm_medium=email&utm_campaign=nl&newsletterad=05/06/2020&bcid=b16c6f948f297f77432f990d4411617f&recip=17935167
Jensen's Comment
Most K-12 schools were probably doing better when I was a child than they're
doing today. The downhill slide is greatest in gang-ridden, drug-infested
urban schools like those in Chicago and New Orleans. Throwing money at such
schools is not the answer until life at home recovers. Finland
knows this, which is why Finland's dads spend more time with school children
than the moms or the teachers.
https://www.theguardian.com/lifeandstyle/2017/dec/04/finland-only-country-world-dad-more-time-kids-moms
Academy's Disgrace: Despite university faculty’s efforts to maintain
rigor and high expectations in their classrooms, grade inflation continues to
rise ---
https://taxprof.typepad.com/taxprof_blog/2020/06/resisting-the-detrimental-effects-of-grade-inflation-on-faculty-and-students.html
Video:
Bing Crosby and Ingrid Bergman argue theory of education ---
https://www.youtube.com/watch?v=7MctafuXLho
The age-old question of when and how to assign low grades. Ingrid did not ask
when to pass brain surgeons or bridge designers versus poets and artists. Today
we might ask what to do with police academy students who do not take
coursework on race relations seriously. Ingrid also did not ask
whether grades motivate learners, or whether a majority of students
taking courses for pass-fail grades learn as much course content as learners
graded A, B, C, D, or F. There's also the huge issue of how to deal with
students who do not even try to learn course content, an issue dealt with
somewhat in the above video.
UCLA
Reinstates Prof Suspended for Refusing to Change Exam or Grades for Black
Students ---
https://legalinsurrection.com/2020/09/ucla-reinstates-prof-suspended-for-refusing-to-change-exam-or-grades-for-black-students/
Eliminating the Grading
System in College: The Pros and Cons
https://thebestschools.org/magazine/eliminating-grading-system-college-pros-cons/
The Disgrace of Grade
Inflation in North America ---
Scroll down
Chronicle of Higher Education
Data-Visualization And Student Evaluations: Male Profs Are Brilliant And
Funny; Female Profs Are Mean And Rude ---
https://taxprof.typepad.com/taxprof_blog/2019/11/data-visualization-and-student-evaluations-male-profs-are-brilliant-and-funny-female-profs-are-mean-.html
Jensen Comment
These and other conclusions are reached after an analysis of millions of course
evaluations on RateMyProfessors.com. Keep in mind that course evaluations on
RMP are self-selected and accordingly do not meet the criteria for statistical
inference. However, the sheer volume of such samples makes them somewhat
informative. I always ignore the numbers and read the subjective comments for
insights.
The above study looks at various other aspects of courses other than teacher
genders.
Given my own rather extensive experiences reading RMP course evaluations I find
it's a mistake to think that most of the responders are disgruntled students.
Although there are clearly a lot of disgruntled students, it seems to me that
most evaluations are positive rather than negative --- possibly
meaning that teachers who suspect they will get positive
evaluations may prompt students to submit evaluations to RMP. I doubt that any
teacher who anticipates negative submissions ever mentions RMP. This
biases the millions of RMP submissions to be more positive than negative.
I was glad to see that RMP dropped its "Red Hot Chili Pepper" competition
that attempted to identify the most popular college teachers in the USA with
pictures of red hot peppers. It encouraged popular teachers to prod their
students into sending in RMP evaluations.
The bottom line is that the numerical evaluations don't mean much on RMP due
to self-selecting samples. However, I find that the subjective comments do
provide some information about course difficulty, teaching style, and course
rigor.
Based on existing research, “the strongest predictor of
(student) evaluations is grade expectations,” he said ---
https://www.insidehighered.com/news/2019/12/09/study-attempts-debunk-criticisms-student-evaluations-teaching?utm_source=Inside+Higher+Ed&utm_campaign=bd09d7a331-DNU_2019_COPY_01&utm_medium=email&utm_term=0_1fcbc04421-bd09d7a331-197565045&mc_cid=bd09d7a331&mc_eid=1e78f7c952
Jensen Comment
The results are consistent with the millions of evaluations on
RateMyProfessors.com, where the highest ratings tend to go to easy graders.
It's never been easier to
get an A at Princeton ---
https://taxprof.typepad.com/taxprof_blog/2020/01/the-decline-and-fall-of-grade-deflation-at-princeton.html
A’s
were the most common grade in all academic divisions. Over two-thirds of the
humanities’
grades were in the A-range — C’s were
virtually nonexistent — versus 46 percent
in the natural sciences.
Although engineering and the natural sciences graded more harshly, students were
about twice as likely to earn an A+ in them as their classmates in other
divisions. Unlike generic A’s, professors must file a special statement
explaining why they’re giving an A+. Both are worth 4.0 points on the GPA
scale.
Continued in article
Jensen Comment
Note the graphs comparing Humanities (highest GPA averages), Social Sciences,
Engineering, and Natural Sciences (lowest GPA averages, slightly under 3.4).
The Top 20 (actually 19)
Universities with the Highest Average GPAs ---
https://ripplematch.com/journal/article/the-top-20-universities-with-the-highest-average-gpas-84ef5edf/
With Ivy League universities making up 4 of the 5 universities with the
highest average GPAs, it’s clear there’s an issue with grade inflation among
some of America’s most elite institutions. The only Ivy League absent from
our full top 20 list is
Princeton University,
which once had
concrete policies
(subsequently abandoned) in place to
prevent grade inflation. Whether or not you use GPA as a way of evaluating
entry-level candidates, it’s important to understand that every university
has a different way of grading, and not all GPAs are created equal.
Evaluating candidates holistically – think
experience, skills, university AND GPA
–
will garner a better set of candidates than screening on GPA alone.
1. Brown University – 3.71
Brown University –
which is known for its relaxed grading system
– once again takes the top spot with an average GPA of 3.71. As reported
last year, Brown’s grading system does not record failing grades and there’s
no such grade as a “D”, leaving A’s, B’s, and C’s as the only grading option
for students. According to
Brown’s website, the de-emphasis on
grades is intentional, and was implemented to encourage students to explore
the academic curriculum widely.
2. Stanford University – 3.66
Similar to our
2018 report, Stanford ranks as No. 2
on our list with its average GPA of 3.66. The grading system remains
unchanged from the previous year, which notably, gives students a shot at
a 4.3 GPA if they receive an A+ in a
class.
3. Harvard University – 3.64
Following closely behind
Stanford University, the self-reported average GPA at Harvard University is
3.64, or hovering around A-. It seems that
not much has changed since 2015, when
a survey of graduating seniors published in the Harvard Crimson reported the
average GPA at 3.64. That same survey found that, despite the frequency of
high GPAs, 72% of students surveyed didn’t think grade inflation at Harvard
was much of a problem at all.
4. Yale University – 3.62
According to a 2017 Yale News article,
92% of Yale faculty believe there is grade inflation at Yale. The article
also points out that while Yale does not publicly release GPA data, Yale
News estimated that around 30% of students graduated with an A- GPA or above
based on the percentage of students that graduated Cum Laude. With an
average GPA of 3.62 reported by the Yale students that use our platform,
that sounds about right.
5. Columbia University – 3.59
Similar to Stanford
University, an A+ at Columbia University gives students a 4.33 GPA,
according to the grading policy listed on the
university’s website. With Columbia
University’s average GPA as a 3.59 for undergraduate students, the Ivy
League made it into our top 5 despite
steps taken by the university to address the
high concentration of students that receive A’s.
See the rest of the top
20 universities with the highest average GPAs below:
6. Vanderbilt University – 3.57
7. Duke University – 3.56
7. Baylor University – 3.56
9. Northeastern University – 3.55
10. Dartmouth College – 3.54
10. Barnard College – 3.54
12. Amherst College – 3.53
12. Rice University – 3.53
12. The University of Pennsylvania – 3.53
15. Washington University in St. Louis – 3.52
15. Northwestern University – 3.52
15. Johns Hopkins University – 3.52
18. Stevens Institute of Technology – 3.51
19. Cornell University – 3.5
19. University of Notre Dame – 3.5
Jensen Comment
At 3.63 Princeton would've ended up at
Rank 4 if Princeton had not been removed from the above ranking.
When universities are extremely
hard to get into, one argument for grade inflation is that it's unfair in
Lake Wobegon to give grades of C or below. Arguments against this
excuse for grade inflation in Lake Wobegon is that even at Princeton and the
other prestigious universities some students are markedly better than other
students. The best students complain that it's
impossible in Lake Wobegon to prove they are better than their peers.
An argument against grade
inflation in Lake Wobegon is that there's not a whole
lot of incentive to try hard to be better than average if the average
is the highest grade that can be earned in a course. My guess is that students
who experience blood, sweat, and tears for grades in Lake Wobegon are those that
face stiff competition in external competency examinations such as the
competition faced by pre-med students when taking the MCAT in stiff competition
to get into medical school. Engineers face licensing examinations. And if you're
trying to get into graduate schools there are GRE, LSAT, GMAT, and other
competitive examinations.
When nobody gets a C grade it's
a lot easier for faculty to assign grades. Grades D and F are for outlier
students who are usually easy to identify. C grades used to be given for average
performance, including students who did not perform so well but really, really
tried. When there are no C grades those students who did not perform well but
really, really tried get B grades.
I think a lot of the blame for
grade inflation across the USA is heavily due to having student evaluations
affect tenure and other performance decisions of teachers. Students now have
blackmail nooses that they hold over the heads of their teachers ---
See below
The Atlantic: Has College Gotten Too Easy? Time spent studying is
down, but GPAs are up ---
https://www.theatlantic.com/education/archive/2019/07/has-college-gotten-easier/594550/
Jensen Comment
In eight decades the median grade across the USA went from C+ to A- (with
variations of course) and efforts in such places as Princeton and Cornell to
limit the proportion of A grades were ended and deemed as failures.
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
Now we ask: Has college gotten too easy? I guess you know what I think.
Higher education has become Lake Wobegon where (almost) all students are
above average in terms of what used to be average.
Especially note the grade inflation
graphs at
www.Gradeinflation.com
Intelligence and educational achievement ---
https://www.sciencedirect.com/science/article/abs/pii/S0160289606000171
This 5-year prospective longitudinal study of
70,000 + English children examined the association between psychometric
intelligence at age 11 years and educational achievement in national
examinations in 25 academic subjects at age 16. The correlation between a
latent intelligence trait (Spearman's g from CAT2E) and a latent trait
of educational achievement (GCSE scores) was 0.81. General intelligence
contributed to success on all 25 subjects. Variance accounted for ranged
from 58.6% in Mathematics and 48% in English to 18.1% in Art and Design.
Girls showed no advantage in g, but performed significantly better on all
subjects except Physics. This was not due to their better verbal ability. At
age 16, obtaining five or more GCSEs at grades A*–C is an important
criterion. 61% of girls and 50% of boys achieved this. For those at the mean
level of g at age 11, 58% achieved this; a standard deviation increase or
decrease in g altered the values to 91% and 16%, respectively.
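The abstract's three pass rates (58% at the mean of g, 91% and 16% at one standard deviation above and below) are roughly what a simple probit (threshold) model would produce. As an illustration — the model form and the fitted numbers below are my own sketch, not anything reported by the study — fitting a two-parameter probit to two of the points nearly reproduces the third:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal, Phi and Phi-inverse

# Fit P(pass | z) = Phi(a + b*z) to two of the reported points:
# P(z=0) = 0.58 and P(z=+1) = 0.91, where z is g in SD units.
a = nd.inv_cdf(0.58)          # ≈ 0.20
b = nd.inv_cdf(0.91) - a      # ≈ 1.14

# The fitted model's prediction one SD below the mean lands close
# to the study's reported 16%.
p_minus_one_sd = nd.cdf(a - b)
print(round(p_minus_one_sd, 2))  # → 0.17
```

The near-match (17% predicted versus 16% reported) suggests the figures are mutually consistent under a normal threshold model, which is a common way such pass-rate gradients arise.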
Jensen Comment
There was a time when grades might have been competitive predictors of
educational achievement in the USA but then grade inflation ruined both
the predictors and the criterion of educational achievement ---
Grade Inflation in High Schools 2005-2018 ---
https://fordhaminstitute.org/sites/default/files/20180919-grade-inflation-high-schools-2005-2016_0.pdf
Also see
https://www.k12academics.com/education-issues/grade-inflation
Grade inflation is the
increase in the number of high grades over time. Grade inflation is often
conflated with lax academic standards. For example, the following quote
about lax standards from a Harvard University report in 1894 has been used
to claim that grade inflation has been a longstanding issue: "Grades A and B
are sometimes given too readily ... insincere students gain passable grades
by sham work." Issues of standards in American education have been
longstanding. However, rising grades did not become a major issue in
American education until the 1960s.
The evidence for grade
inflation in the US was sparse, largely anecdotal and sometimes
contradictory until recently. Hard data were not abundant. A Stanford
University report in the 1990s showed that grades had been rising since the
1960s; in an effort to stem grade inflation, Stanford changed its grading
practices slightly. National surveys in the 1990s generally showed rising
grades at American colleges and universities, but a survey of college
transcripts by a senior research analyst in the US Department of Education
found that grades declined slightly in the 1970s and 1980s. Data for
American high schools were lacking.
However, recent data leave little doubt that grades are rising at American
colleges, universities and high schools.
Leaders from a number of institutions, including Harvard University and
Princeton University, have publicly stated that grades have been rising and
have made efforts to change grading practices. An evaluation of grading
practices in US colleges and universities written in 2003, shows that since
the 1960s, grades in the US have risen at a rate of 0.15 per decade on a 4.0
scale. The study included over 80 institutions with a combined enrollment of
over 1,000,000 students. An annual national survey of college freshmen
indicates that students are studying less in high school, yet an increasing
number report high school grades of A- or better.
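The cited rate of 0.15 grade points per decade compounds into a large shift over a career's worth of cohorts. A minimal sketch of that linear drift — the 1960 base value of 2.5 is an illustrative assumption, not a figure from the evaluation:

```python
def projected_mean_gpa(base_gpa, base_year, year, rate_per_decade=0.15):
    """Linearly extrapolate mean GPA at the cited 0.15-per-decade rate."""
    return base_gpa + rate_per_decade * (year - base_year) / 10

# Starting from a hypothetical 2.5 mean in 1960, six decades of drift:
print(round(projected_mean_gpa(2.5, 1960, 2020), 2))  # → 3.4
```

At that pace, roughly a full letter grade of inflation accumulates in about 67 years, which is broadly in line with the C+-to-A- shift described elsewhere on this page.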
The debate on grade
inflation has moved from assessment to causes. Are grades rising because
standards are being lowered or because students are producing better work?
Grade inflation is highly correlated with the time when student evaluations of
teachers commenced to seriously impact tenure, promotion, and pay of teachers.
Efforts to limit granting of A grades at places like Cornell and Princeton
were deemed failures.
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
It didn't help when RateMyProfessors.com commenced to post millions of student
evaluations of named teachers online for the world to see
https://www.ratemyprofessors.com/
The top college teachers at the above site tend to be rated as "easy graders."
For many years teaching evaluations were private (often anonymous)
communications between students and teachers. When colleges commenced to share
teaching evaluations with department heads, deans, and promotion/tenure
committees, grade inflation commenced to soar. When employers commenced to
refuse to even interview students below a B+ or A- overall grade average,
college students commenced to lobby intensely for higher grades.
Especially vulnerable are assistant professors whose careers are on the line
when their teaching evaluations are shared with promotion and tenure committees.
Especially vulnerable are all professors in colleges that share teaching
evaluations with the entire college community and/or the world. Also vulnerable
are over a million professors who are on public display at RateMyProfessor.com
--- http://www.ratemyprofessors.com/
Is
this extreme grade inflation or what?
"Bill Gates Never Attended Any
Classes He Signed up for at Harvard --- But He Got As Anyway," by Megan
Willett, Tech Insider via Business Insider, March 9, 2016 ---
http://www.businessinsider.com/bill-gates-never-attended-class-at-harvard-2016-3
Fired Because He Wouldn't Dumb Down a Course?
AAUP report concludes that a professor at Community College of Aurora was likely
fired for refusing to compromise on rigor in his courses as part of a "student
success" initiative.
https://www.insidehighered.com/news/2017/03/29/aaup-report-says-adjunct-professor-was-likely-fired-insisting-rigor-courses?utm_source=Inside+Higher+Ed&utm_campaign=094f010213-DNU20170329&utm_medium=email&utm_term=0_1fcbc04421-094f010213-197565045&mc_cid=094f010213&mc_eid=1e78f7c952
Jensen Comment
There are always exceptions, but in general tough academic courses get lower
student evaluations. Exhibit A contains the "Level of Difficulty" ratings among
the top teachers on RateMyProfessors.com ---
http://www.ratemyprofessors.com/blog/toplist/highest-rated-university-professors-of-2015-2016
Also see
http://www.ratemyprofessors.com/blog/toplist/highest-rated-junior-and-community-college-professors-of-2015-2016
"Grade Inflation—Why Princeton Threw in the Towel," by Russell K.
Nieli, Minding the Campus, October 15, 2014 ---
http://www.mindingthecampus.com/2014/10/grade-inflation-why-princeton-threw-in-the-towel/
Thank you Barry Rice for the heads up!
To obtain data on GPA trends, click on the
institution of interest. Median grades of graduates, the 50th percentile of a
graduating class, will be about 0.1 higher than the GPAs shown here. When data
sources do not indicate how GPAs were computed, this is denoted as "method
unspecified." All non-anonymous sources are stated on the data sheets.
gradeinflation.com, Copyright 2002, Stuart Rojstaczer,
www.stuartr.com, no fee for not-for-profit use
"Thomas Lindsay says 43 percent of college grades are A's, up 28
percentage points from 1960," by Thomas Lindsay, PolitiFact, January 12,
2013 ---
http://www.politifact.com/texas/statements/2013/jan/31/thomas-lindsay/thomas-lindsay-says-43-percent-college-grades-are-/
"U. of Iowa Accidentally Shares GPAs of 2,000 Students,"
Inside Higher Ed, February 8, 2013 ---
http://www.insidehighered.com/quicktakes/2013/02/08/u-iowa-accidentally-shares-gpas-2000-students
Jensen Comment
If a recipient of that email would share the data, it would be interesting to
see statistics (means, median, standard deviations, and kurtosis) of those grade
distributions. Iowa is reported to have a somewhat lower grade inflation problem
than the more extreme cases of grade inflation reported at
http://www.gradeinflation.com/
The high grade inflation universities include Duke, Dartmouth, Harvard, Furman,
UC Berkeley, and Michigan.
Chronicle of Higher Education
Graduation Rates Are Rising, but Is That Because Standards Are Slipping?
---
https://www.chronicle.com/article/Graduation-Rates-Are-Rising/246480?utm_source=at&utm_medium=en&cid=at
Jensen Comment
The biggest disgrace in higher education from community colleges to the Ivy
League is grade inflation where median grade averages moved from C+ in the 1950s
to A- in the 21st Century ---
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
Reasons are complicated and varied, but major causes are pressures to graduate
everybody, the rising importance of grades for jobs and graduate studies, and
the increased power of student teaching evaluations on faculty tenure and
promotion and retention decisions. Virtually all the top teachers on
RateMyProfessors.com are easy graders. A few universities like Princeton and
Cornell tried to bring down the majority of A and A- grades in courses. These
efforts became abandoned failures. Harvard never even tried to bring down grade
inflation. A newly-hired professor who gives a median C+ grade in courses
probably won't be rehired due to low teaching evaluations ---
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
In K-12, grade inflation is even worse, with high school students getting
diplomas who cannot functionally read, write, or compute the APR interest rate
on a car loan (even with a calculator or computer). Those that go to college may
never have to write a term paper, and the minority assigned to write a term
paper can easily buy term papers online.
Welcome to the USA's higher education colleges and universities on Lake Wobegon ---
https://en.wikipedia.org/wiki/Lake_Wobegon
Chronicle of Higher Education
Cal State’s Retreat From Remediation Stokes Debate on College Readiness
---
https://www.chronicle.com/article/Cal-State-s-Retreat-From/241227?utm_source=at&utm_medium=en&cid=at
The problem with remedial courses is that you had to pass them to move upward
California State University’s (system-wide) decision to eliminate all
noncredit remedial classes next fall will either remove roadblocks to
success for struggling students or set more of them up for failure,
depending on whom you ask.
The shift at the nation’s largest public-university system comes at a time
of intense national scrutiny into how colleges should decide who is ready
for college-level classes and how best to bring those who aren’t ready up to
speed.
Four in 10 entering freshmen at Cal State must complete at least one
remedial course before they can start earning college credit. The system’s
chancellor, Timothy P. White, thinks that’s one reason for Cal State’s
dismal 19-percent four-year graduation rate. The system has committed to
doubling that, to 40 percent, by 2025, and hopes that jettisoning remedial
classes will help.
Across the
country, colleges with similarly high dropout rates are questioning whether
the classes do more harm than good. Advocates say that as part of a
broader umbrella of developmental education,
which also includes tutoring and counseling, the courses are crucial for
students who start out far behind their peers.
Continued in article
Welcome to Lake Wobegon's system of tutors and counselors who pass everybody
upward without assigning low grades to anybody ---
https://en.wikipedia.org/wiki/Lake_Wobegon
Besides, reading, writing, and arithmetic are obsolete skills that
increasingly are being passed on to robots.
Your lousy SAT score will be adjusted upward if you graduated from a high school
with rock-bottom academic standards ---
https://www.chronicle.com/article/Cal-State-s-Retreat-From/241227?utm_source=at&utm_medium=en&cid=at
And you will graduate from college as long as you attend classes and look like
you're trying.
"Are Your Students Learning From Their Mistakes?" by David Goobler,
Chronicle of Higher Education, June 1, 2016 ---
https://chroniclevitae.com/news/1421-are-your-students-learning-from-their-mistakes?cid=at&utm_source=at&utm_medium=en&elqTrackId=d5b397c2094347e1b0e824611a75a491&elq=1158b22a0ab54272a738491e2c6538ab&elqaid=9288&elqat=1&elqCampaignId=3251
Jensen Comment
If instructors are not giving mostly A grades
in a course, large-scale empirical studies show that students adapt to
what counts most for grades. For example, most of them will dig in their
heels and do whatever it takes at critical points in the grading
process. This is widely known as the "no-significant-difference"
phenomenon.
http://faculty.trinity.edu/rjensen/assess.htm#AssessmentIssues
When given second chances, a common strategy is
to wing it the first time and check the grade. If the grade is low,
students dig in as they should have the first time. One huge problem
with second chances is that this policy contributes to the biggest
scandal in education in recent years --- grade inflation, where the
median grade across North America tends to be A-.
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
Of course the main cause of grade inflation is
having teacher evaluations affect performance evaluations and tenure.
Second-chance teachers most likely get higher teacher evaluations.
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
If second-chance teachers are giving mostly A
grades, something is wrong with academic standards.
Here's An Illustration of Grade Inflation
"Nearly Half Of Detroit’s Adults Are
Functionally Illiterate, Report Finds,"
Huffington Post, July 8, 2013 ---
http://www.huffingtonpost.com/2011/05/07/detroit-illiteracy-nearly-half-education_n_858307.html
Detroit’s population
fell by 25 percent in the last decade. And of those that stuck around,
nearly half of them are functionally illiterate, a new report finds.
According to estimates
by The National Institute for Literacy, roughly 47 percent of adults in
Detroit, Michigan — 200,000 total — are “functionally illiterate,” meaning
they have trouble with reading, speaking, writing and computational skills.
Even more surprisingly, the Detroit Regional Workforce
Fund finds half of that
illiterate population has obtained a high school degree.
The DRWF report places
particular focus on the lack of resources available to those hoping to
better educate themselves, with fewer than 10 percent of those in need of
help actually receiving it. Only 18 percent of the programs surveyed serve
English-language learners, despite 10 percent of the adult population of
Detroit speaking English “less than very well.”
Additionally,
the report finds, one in three workers in the state of Michigan lack the
skills or credentials to pursue additional education beyond high school.
In March, the Detroit
unemployment rate hit 11.8 percent, one of the highest in the nation, the
U.S. Bureau of Labor Statistics reported last month. There is a glimmer of
hope, however: Detroit’s unemployment rate dropped by 3.3 percent in the
last year alone.
Continued in article
Jensen Question
Will nearly all the illiterate high school graduates in Detroit get a free
college diploma under the proposed "free college" plan?
My guess is that they will get their college diplomas even though they will
still be illiterate, because colleges will graduate them in order to sop up the
free taxpayer gravy for their college "education."
Everybody will get a college diploma tied in a blue ribbon.
I doubt that illiteracy is much worse in Detroit than in other large USA
cities like Chicago and St. Louis.
In Europe fewer than half of the Tier 2 (high school) graduates are even allowed
to go on to college or free trade schools ---
OECD Study Published in 2014: List of countries
by 25- to 34-year-olds having a tertiary education degree ---
https://en.wikipedia.org/wiki/List_of_countries_by_25-_to_34-year-olds_having_a_tertiary_education_degree
Former Harvard University President Laments Grade Inflation ---
http://larrysummers.com/2016/04/01/if-we-really-valued-excellence-we-would-single-it-out/
. . .
In any event, I think that the pervasiveness of top
grades in American higher education is shameful. How can a society that
inflates the grades of its students and assigns the top standard to average
performance be surprised when its corporate leaders inflate their earnings,
its generals inflate their body counts, or its political leaders inflate
their achievements?
More than ethics classes this is a matter of moral
education. And America’s universities are failing when “A” is the most
commonly-awarded grade. If we really valued excellence, we would single it
out.
I did succeed in a small way as Harvard president
in reducing the fraction of students graduating with honors from a ludicrous
90 percent to an excessive 55 percent. I wish I had been able to do more.
Even more I wish that today’s academic leaders would take up this issue.
Jensen Comment
Grade inflation exploded when student evaluations commenced to play a crucial
role in tenure decisions and faculty pay.
2016 Update on Outrageous Grade Inflation in the USA
(especially in prestigious universities but not quite as scandalous in community
colleges)
B, D, and F grades are relatively stable, but in Lake Wobegon A grades rose
from 11.5% in 1940 to 45.5% in 2013 (read that as nearly half). The median grade
in most courses is A-, except in community colleges.
Grade Distributions 1940-2013
"The rise of the ‘gentleman’s A’ and the GPA arms race," by Catherine
Rampell, The Washington Post, March 28, 2016 ---
https://www.washingtonpost.com/opinions/the-rise-of-the-gentlemans-a-and-the-gpa-arms-race/2016/03/28/05c9e966-f522-11e5-9804-537defcc3cf6_story.html?postshare=1381459215004789&tid=ss_tw
The waters of Lake Wobegon have
flooded U.S. college campuses. A’s — once reserved for recognizing
excellence and distinction — are today the most commonly awarded grades in
America.
That’s true
at both Ivy League institutions and community colleges, at huge flagship
publics and tiny liberal arts schools, and in English, ethnic studies and
engineering departments alike. Across the country, wherever and whatever
they study, mediocre students are increasingly likely to receive supposedly
superlative grades.
Such is the takeaway of
a massive new report on grade inflation from
Stuart Rojstaczer, a former Duke University professor, using data he and
Furman University professor Chris Healy collected. Analyzing 70 years of
transcript records from more than 400 schools, the researchers found that
the share of A grades has tripled, from just 15 percent of grades in 1940 to
45 percent in 2013. At private schools, A’s account for nearly a majority of
grades awarded.
These
findings raise questions not only about whether the United States has been
watering down its educational standards — and hampering the ability of
students to compete in the global marketplace in the process. They also lend
credence to the perception that campuses leave their students coddled,
pampered and unchallenged, awarding them trophies just for showing up.
So, what’s
behind the sharp rise in GPAs?
Students
sometimes argue that their talents have improved so dramatically that they
are deserving of higher grades. Past
studies, however, have found little evidence of
this.
Continued in article
Jensen Comment
In my opinion there are two major causes of grade inflation.
Cause 1 is that the C grade became tantamount to an F grade
in both the job market and admission to graduate schools.
Cause 2 is the changed policy of making student evaluations
of teachers key to tenure and pay. This dependency made it
necessary for teachers to do everything possible to avoid negative reviews, including
making it easy to get an A grade in a course. Virtually all the top-rated professors on
RateMyProfessors.com are also rated by students as easy graders ---
http://www.ratemyprofessors.com/
Teachers viewed as tough graders take a hit from their students.
How to Mislead With Statistics
"The 50 Colleges Where Students Work the Hardest," by Emmie Martin,
Business Insider, December 29, 2015 ---
http://www.businessinsider.com/colleges-with-the-hardest-working-students-2015-12
Jensen Comment
This article really does not provide evidence that students work harder at these
50 colleges and universities relative to other universities.
The list includes 50 top universities with a bent toward private
universities, such as Ivy League-type universities and expensive liberal arts
colleges like Swarthmore. If Ms. Martin had said that the students are suspected
of working hard because they were admitted to these colleges, most of which are
very hard to get into, I might be inclined to agree that they had a great work
ethic before being admitted. But this does not mean that they work harder if
getting A grades is relatively easy after being admitted. In fact, I would
hypothesize that students going to other top universities who were not able to
get into the 50 colleges above worked harder to prove themselves in college.
Moreover, it's not clear that students in these colleges work as hard today
as students in these same universities worked 60 years ago, when there was
much, much greater competition for grades. Grade inflation in most of these
universities (I think virtually all) is the most pronounced among all
universities --- see the above links.
If the median grade in most courses is A- these days, students do not have to
work as hard for top grades as they did years ago, when these courses in these
same universities had median grades of C.
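The GPA arithmetic behind such a shift is simple: a mean GPA is just the grade-share-weighted average of grade points on the standard 4-point scale. A minimal Python sketch with two hypothetical distributions (the shares below are illustrative, chosen only to contrast a C-centered era with an A-centered one; they are not measured data):

```python
# Mean GPA implied by a grade-share distribution on the standard 4-point scale.
# Both distributions below are hypothetical illustrations, not registrar data.
points = {"A": 4.0, "A-": 3.7, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def mean_gpa(shares):
    """Weighted average of grade points; shares should sum to 1."""
    return sum(points[grade] * share for grade, share in shares.items())

# A roughly C-centered distribution (1940s-style) vs. an A-centered one (today).
old_style = {"A": 0.15, "B": 0.25, "C": 0.35, "D": 0.15, "F": 0.10}
new_style = {"A": 0.45, "A-": 0.20, "B": 0.25, "C": 0.08, "F": 0.02}

print(round(mean_gpa(old_style), 2))  # → 2.2
print(round(mean_gpa(new_style), 2))  # → 3.45
```

The same arithmetic explains why a campus where A is the modal grade ends up with mean GPAs in the 3.4-plus range reported for elite schools.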
To my knowledge, Princeton University is the only university in
the above list that made a concerted effort to limit the number of A grades
given in most courses. However, after Harvard tried to lure Princeton's
applicants away with promises of easier A grades at Harvard, Princeton dropped
its effort to limit the percentage of A grades in courses.
We may think students work harder in prestigious universities,
but a five-year Cornell University experiment that published over 800,000
course grades discovered that students flocked to instructors who gave the
highest percentage of A grades rather than to those who made students work
harder for A grades.
The bottom line is that Emmie Martin in no way convinced
me that students at those 50 grade-inflated universities work the hardest. I
think they may work the least where grades are easiest to get, such as at Harvard.
Harvard University expelled over 60 students who cheated by plagiarizing each
other's work on an assignment in a political science course where the instructor
promised everybody an A grade for simply doing the assignments and taking the
examinations. When assured of an A grade, they reasoned that doing the work well
would not be worth the effort since it would not improve their grades
---
http://faculty.trinity.edu/rjensen/Plagiarism.htm#NewKindOfCheating
My Hypothesis:
Students do not work the hardest in grade-inflated universities. There are some
exceptions, of course. Even pre-med students at Ivy League universities work
their butts off in science courses, even when they are assured of getting A
grades, because they know they will one day have to take the very competitive
MCAT admissions test for medical school. The same applies to engineering,
accounting, and other majors that have licensing examinations after graduation.
In Accounting We Call it EIEO ---
Everything In, Everything Out
"Cranking Out Credentials — but What
About Quality?" by Katherine Mangan, Chronicle of Higher Education,
November 17, 2015 ---
http://chronicle.com/article/Cranking-Out-Credentials-/234228?cid=wc&utm_source=wc&utm_medium=en&elq=e04cc3e81c37409fa69422fd0133d152&elqCampaignId=1874&elqaid=6943&elqat=1&elqTrackId=befc899056144718b79a868368dc3c88
Jensen Comment
Exhibit A comprises all the graduate programs that give only A or B grades to
any student who makes an effort --- some B grades go to students who make an
effort but would not have a chance on a competency-based examination. For
example, law schools are increasingly admitting and graduating large numbers of
students who have no chance of passing the bar examination. Many accounting
graduates are afraid even to take the CPA examination. California's two-year
colleges now want to fire their accreditor and bring in an easier accrediting
agency.
The EIEO phenomenon is linked to grade
inflation and the power students now have over teacher performance ratings ---
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
Taking a RateMyProfessors.com (RMP) Hit for Tough Grading
The national competition below has nothing whatsoever to do with RMP
"4 Professors of the Year Are Honored for Excellence in Teaching and Service,"
by Kate Stoltzfus, Chronicle of Higher Education, November 19, 2015 ---
http://chronicle.com/article/4-Professors-of-the-Year-Are/234266?cid=at&utm_source=at&utm_medium=en&elq=9bb456361c274fdc9ab06414d3c75bda&elqCampaignId=1887&elqaid=6955&elqat=1&elqTrackId=82fc8b62b32d40a9ba12a04e24126998
Most
professors hope to have an impact on their students, but their work usually
takes place behind classroom doors. For the national recipients of the
2015 U.S.
Professor of the Year Awards, their influence on
their campuses is now rippling outward.
. . .
Community Colleges
Amina El-Ashmawy, a professor of chemistry at Collin
College, in Texas
When the cost of textbooks spiked, Ms. El-Ashmawy decided to
write her own curriculum with colleagues at Collin College
so that every student could get access to the materials for
her chemistry lab. She has served on American Chemical
Society exam committees and has collected data to improve
the college’s approaches to learning. Because chemistry can
be abstract, Ms. El-Ashmawy uses everyday examples to make
science relevant and wants students to feel free to make
mistakes as they learn. She says that, after she graduated,
the pay in laboratory work was "enticing," but such work
"didn’t excite me the way teaching did."
Continued in article
Jensen Comment
This is a grade inflation era when most professors across the USA, trembling in
fear of student evaluations that affect their tenure and performance
evaluations, are good teachers with one flaw --- they've become easy graders and
thus caused the grade inflation in virtually all colleges and universities ---
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
Each year I look up the four Professor of the Year winners on
RateMyProfessors.com for insights into what makes them award-winning professors
in a national competition that has nothing to do with RateMyProfessors.com.
And yes, I am aware of all the possibly misleading results on RateMyProfessors.com.
Firstly, the sample sizes are relatively small and the respondents are
self-selecting. But I study RMP a lot since it is entertaining as well as
informative. What I find is that, contrary to popular opinion, great numbers of
respondents praise rather than lambaste their teachers. I don't pay much
attention to the rating numbers, but I do like to read the subjective comments
of students. Often they are quite insightful about teaching.
Virtually all the time, these four
award-winning professors also rate high on RateMyProfessors.com for caring
about students, dedication to teaching, and teaching quality.
But the sad news is that nearly always they are also rated as "easy" teachers in
terms of grading.
A noteworthy exception is the 2015
award-winning Professor El-Ashmawy, cited above, who is apparently a hard grader
willing to take a hit on her teaching evaluations ---
http://www.ratemyprofessors.com/ShowRatings.jsp?tid=225916
Some of the 70 RMP respondents describe
her as an incredible teacher. I suspect many more would have
done so if her median grade were A- in every course.
Professor El-Ashmawy should also
receive a Courage of the Year Award if there were such an award. I am really,
really glad that she received a coveted national 2015 Professor of the Year
Award without selling her soul to grade inflation pressures.
Bravo!
I might also note that she teaches
online as well as onsite and must work night and day to perfect her craft.
RateMyProfessors.com ---
http://www.ratemyprofessors.com/
"Professors Read Mean Student Evaluations," by Paul Caron, TaxProf
Blog, June 28, 2014 ---
http://taxprof.typepad.com/taxprof_blog/2014/06/professors-read-.html
"Lower Education," by Michael Morris, Inside Higher
Ed, September 9, 2011 ---
Click Here
http://www.insidehighered.com/views/2011/09/09/morris_essay_on_faculty_responsibility_for_decline_in_college_student_standards
"When Students Rate Teachers,
Standards Drop Why do colleges tie academic careers to winning the approval of
teenagers? Something is seriously amiss," by Lyell Asher, The Wall Street
Journal, October 27, 2013 ---
http://online.wsj.com/news/articles/SB10001424052702304176904579115971990673400?mod=djemEditorialPage_h
"Why We Inflate Grades," by Peter Eubanks, Inside
Higher Ed, August 9, 2011 ---
http://www.insidehighered.com/views/2011/08/09/essay_on_why_faculty_members_participate_in_grade_inflation
Grade distribution by course and section at UW-Madison (since 2004)---
http://www.metafilter.com/110462/Grade-distribution-by-course-and-section-at-UWMadison
"Most Frequently Awarded Grade at Harvard: A," Inside
Higher Ed, December 4, 2013 ---
http://www.insidehighered.com/quicktakes/2013/12/04/most-frequently-awarded-grade-harvard
Mode = A
Median = A-
In the 1940s both the mode and the median grade were C (the historic average-performance grade).
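Those two summary statistics follow mechanically from the grade-share distribution: the mode is the single most common grade, and the median is the grade at which the cumulative share, counting down from the top grade, first reaches 50%. A minimal sketch with hypothetical shares chosen only to reproduce the reported mode of A and median of A- (not actual Harvard data):

```python
# Modal and median letter grade from a share distribution, ordered high to low.
# The shares are hypothetical, picked only so that mode = A and median = A-.
grades = ["A", "A-", "B+", "B", "B-", "C", "D", "F"]
shares = [0.35, 0.20, 0.15, 0.12, 0.08, 0.06, 0.02, 0.02]

mode = grades[shares.index(max(shares))]  # single most common grade

# Median: first grade (from the top) at which the cumulative share hits 50%.
cumulative = 0.0
for grade, share in zip(grades, shares):
    cumulative += share
    if cumulative >= 0.5:
        median = grade
        break

print(mode, median)  # → A A-
```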
Jensen Comment
It would be sad if it were just the Ivy League that gave out such high median
grades. But these days high grades are given out in virtually all USA
colleges and universities ---
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
Look at the data tables and charts
The scandal in higher education is grade
inflation.
Virtually all USA universities, and especially the elite universities, have moved
median course grades from C in the 1940s to A- in the 21st century, such that
graduating with high grades no longer means as much. The coin of an education is
badly cheapened by grade inflation when students receive high grades without
much real learning ---
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
Grade inflation also exists in other nations,
but many other nations are different from the USA (where slow learners can
always be admitted to some college) in that only the intellectually elite
are allowed to go to college ---
OECD Study Published in 2014: List of
countries by 25- to 34-year-olds having a tertiary education degree ---
https://en.wikipedia.org/wiki/List_of_countries_by_25-_to_34-year-olds_having_a_tertiary_education_degree
Whereas nations like Finland and Germany only
admit elite and motivated learners into colleges, what Bernie Sanders intends is
that virtually anybody who wants to can be admitted to college for free. Sanders
most likely hopes that the unmotivated and low-aptitude admissions will not
graduate, but there are not many such academic standards in this era of grade
inflation ---
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
In my opinion college diplomas will
mean less and less as the 21st Century unfolds even though taxpayers will be
shelling out billions for degrees not worth the sheepskin they're printed
on.
December 25, 2015 reply from Amy Haas
In my experience, college
for all turns college into high school. Students enter my urban community
college classroom with weak academic skills, expecting college to be a
continuation of high school: show up, earn a C; do a little work, get an A.
Many of them never earn a degree. College for all has resulted in a dumbing
down of the academic curriculum and, consequently, of the value of the college
degree. Students exit with lots of debt and a degree that does not open the
doors it once did.
Amy Haas
KBCC
Brooklyn, NY
"As Graduation Rates Rise, Experts Fear Diplomas Come Up Short," by
Motoko Rich, The New York Times, December 26, 2015 ---
http://www.nytimes.com/2015/12/27/us/as-graduation-rates-rise-experts-fear-standards-have-fallen.html?hpw&rref=education&action=click&pgtype=Homepage&module=well-region&region=bottom-well&WT.nav=bottom-well&_r=0
. . .
It is a pattern repeated in other school districts across the state and
country — urban, suburban and rural — where the
number of students earning high
school diplomas has risen to historic peaks, yet measures of academic
readiness for college or jobs are much lower.
This has led educators to question the real value of a high school diploma
and whether graduation requirements are too easy.
Continued in article
"Teacher assails practice of giving passing grades to failing students,"
by Jay Mathews, The Washington Post, May 17, 2015 ---
http://www.washingtonpost.com/local/education/teacher-assails-practice-of-giving-passing-grades-to-failing-students/2015/05/17/f38f88ae-f9ab-11e4-9030-b4732caefe81_story.html
Caleb Stewart Rossiter, a college professor and
policy analyst, decided to try teaching math in the D.C. schools. He was
given a pre-calculus class with 38 seniors at H.D. Woodson High School. When
he discovered that half of them could not handle even second-grade problems,
he sought out the teachers who had awarded the passing grades of D in
Algebra II, a course that they needed to take his high-level class.
There are many bewildering stories like this in
Rossiter’s new book, “Ain’t
Nobody Be Learnin’ Nothin’: The Fraud and the Fix for High-Poverty Schools,”
the best account of public education in the nation’s
capital I have ever read. It will take me three columns to do justice to his
revelations about what is being done to the District’s most distracted and
least productive students.
Teachers will tell you it is a no-no to ask other
teachers why they committed grading malpractice. Rossiter didn’t care. Three
of the five teachers he sought had left the high-turnover D.C. system, but
the two he found were so candid I still can’t get their words out of my
mind.
The first, an African immigrant who had taught
special education, was stunned to see one student’s name on Rossiter’s list.
“Huh!” Rossiter quoted the teacher as saying. “That boy can’t add two plus
two and doesn’t care! What’s he doing in pre-calculus? Yes of course I
passed him — that’s a gentleman’s D. Everybody knows that a D for a special
education student means nothing but that he came in once in a while.”
Continued in article
RateMyProfessors.com ---
http://www.ratemyprofessors.com/
"Professors Read Mean Student Evaluations," by Paul Caron, TaxProf
Blog, June 28, 2014 ---
http://taxprof.typepad.com/taxprof_blog/2014/06/professors-read-.html
Stanford University:
Launched in August 2016,
Carta aggregates information from recent student evaluations and 15 years of
registrar records, including each course’s workload and grade distribution.
Students can visualize a weekly schedule and compare the intensity of their
planned course load with that of previous quarters. More than 90 percent of
undergraduates have used Carta since it launched.
https://medium.com/stanford-magazine/choosing-classes-with-a-little-help-from-big-data-e87a35463341
Jensen Comment
Carta at Stanford is very similar to an experiment conducted and
eventually (after several years) abandoned at Cornell University, although the
main purposes appear to differ somewhat. Whereas the Cornell experiment was largely
motivated by a desire to combat grade inflation, Stanford's Carta purportedly is designed to
help students choose courses. In some respects the outcome may be the same, as
students flock to easy courses and easy-grading professors. Grade inflation
at both universities is a huge problem exacerbated by teaching evaluations ---
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
Cornell's faculty senate elected to stop revealing grade distributions of
courses and individual faculty ---
http://news.cornell.edu/stories/2011/05/faculty-senate-vote-may-help-stop-grade-inflation
It just got harder to
shop around for good grades at Cornell.
The Faculty Senate voted
May 11 to stop posting course median grades on a university website. The
resolution, aimed at ending grade inflation, passed by a margin of about
3-to-1, according to Dean of the University Faculty William Fry.
The resolution states that students have been using online information on
course median grades -- halfway between the lowest and highest -- to sign up
for classes in which higher grades are awarded, contributing to grade
inflation at Cornell. The Office of the University Registrar's website has
reported median grades since
1998.
Research by two Cornell
professors provided the resolution's rationale. Assistant professor of
economics Talia Bar, professor of marketing and economics Vrinda Kadiyali
and an Israeli colleague of the two showed in a 2009 paper that the
availability of "grade information online induced students to select
leniently graded courses -- or in other words, to opt out of courses they
would have selected absent considerations of grades."
The paper, "Grade
Information and Grade Inflation: The Cornell Experiment," was published in
the Journal of Economic Perspectives. "It seemed like a very thorough
evaluation, a very rational approach," said associate professor of
nutritional sciences Charles McCormick, who presented the resolution on
behalf of the senate's Educational Policy Committee.
The Office of the
University Registrar will continue to record median grades offline but make
them available only to deans, department chairs and those needing the data
for research.
The Faculty Senate has
asked Vice President for Student and Academic Services Susan Murphy, who
oversees the Office of the University Registrar, to implement the ruling.
The May 11 resolution
reverses part of the senate's 1996 decision to post course median grades
online and to include them on students' transcripts. The stated rationale at
that time was that students, faculty and others trying to evaluate
transcripts would benefit from information enabling them to interpret course
grades. And the presence of median grades on transcripts, the senate
reasoned, might encourage students to take courses with relatively low
median grades. Median grades appeared online immediately after the 1996
resolution, but technological obstacles precluded their appearance on
transcripts until fall 2008.
In May 2009, the senate
tabled a resolution essentially identical to the one it passed May 11.
"Since median grades had just begun appearing on transcripts, some senators
felt that we hadn't had time to see how the intent of the 1996 resolution
would play out -- that is, perhaps now that median grades were also on
transcripts, students wouldn't be so quick to choose courses with high
medians," said associate professor of electrical engineering David Delchamps.
But in fact, McCormick
said, the policy has "had the opposite effect."
Continued in article
Former Duke University geology professor Stuart Rojstaczer collected more
data from more universities than any other individual in history ---
https://en.wikipedia.org/wiki/Stuart_Rojstaczer
NYT opinion article by Professor Rojstaczer: Student Evaluations Lead
to Bad Data and That Leads to Bad Decisions ---
https://www.nytimes.com/roomfordebate/2012/09/17/professors-and-the-students-who-grade-them/student-evaluations-offer-bad-data-that-leads-to-the-wrong-answer
Purportedly, Princeton University in 2004 started doing more than the other
Ivy League universities to limit the number of A grades somewhat, although
participation by faculty was voluntary. Cornell's effort to embarrass faculty
about grade inflation by publishing the grading distributions of all courses each
term was deemed a failure in curbing grade inflation, and the program was dropped by
Cornell. Princeton's program capping the number of A grades at 35% in most
classes may now be rescinded.
"Harvard Students Told College Applicants Not To Go To Princeton Because
They Wouldn't Get As Many 'A's'," by Peter Jacobs, Business Insider, August
8, 2014 ---
http://www.businessinsider.com/harvard-students-college-applicants-not-to-go-to-princeton-2014-8
Students at top colleges across the country —
including Harvard, Yale, and Stanford —
used Princeton University's limit on A range grades to dissuade potential
applicants from attending the New Jersey Ivy,
according to a new report from Princeton.
A 2004 policy adopted by Princeton sought to end grade inflation at the
university by recommending that departments place a 35% cap on A-range
grades for each academic course. However, The New York Times reports,
students have resisted the policy since it was
implemented a decade ago, saying that it
devalued their work and potentially gave their peers at rival schools a
competitive edge with post-graduate opportunities.
Now, Princeton may change its grading policy
following the release this week of a report commissioned by Princeton
President Christopher Eisgruber. The report recommends that Princeton remove
the "numerical targets" from their grading policy, as they are often
misunderstood as quotas or inflexible caps.
The report also found that this policy
inadvertently led potential applicants and their families to question
whether they should apply to Princeton, with students at other highly ranked
schools citing the policy to recruit applicants elsewhere:
The
perception that the number of A-range grades is limited sends the message
that students will not be properly rewarded for their work. During the
application process, students and parents consider the possible
ramifications in terms of reduced future placement and employment potential
... Janet Rapelye, Dean of Admission, reports that the grading policy is the
most discussed topic at Princeton Preview and explains that prospective
students and their parents see the numerical targets as inflexible. The
committee was surprised to learn that students at other schools (e.g.,
Harvard, Stanford, and Yale) use our grading policy to recruit against us.
Harvard made news last December
when it confirmed that the most common grade given to undergraduates is an
"A" and the median grade is an "A-." The Yale
Daily News has also
reported that 62% of students' grades were in the A-range.
"Type-A-Plus Students Chafe at Grade Deflation," by Lisa Foderaro,
The New York Times, January 29, 2010 ---
http://www.nytimes.com/2010/01/31/education/31princeton.html?hpw
When Princeton University set out six years ago to
corral galloping grade inflation by putting a lid on A’s, many in academia
lauded it for taking a stand on a national problem and predicted that others
would follow.
But the idea never took hold beyond Princeton’s
walls, and so its bold vision is now running into fierce resistance from the
school’s Type-A-plus student body.
With the job market not what it once was, even for
Ivy Leaguers, Princetonians are complaining that the campaign against
bulked-up G.P.A.’s may be coming at their expense.
“The nightmare scenario, if you will, is that you
apply with a 3.5 from Princeton and someone just as smart as you applies
with a 3.8 from Yale,” said Daniel E. Rauch, a senior from Millburn, N.J.
The percentage of Princeton grades in the A range
dipped below 40 percent last year, down from nearly 50 percent when the
policy was adopted in 2004. The class of 2009 had a mean grade-point average
of 3.39, compared with 3.46 for the class of 2003. In a survey last year by
the undergraduate student government, 32 percent of students cited the
grading policy as the top source of unhappiness (compared with 25 percent
for lack of sleep).
In September, the student government sent a letter
to the faculty questioning whether professors were being overzealous in
applying the policy. And last month, The Daily Princetonian denounced the
policy in an editorial, saying it had “too many harmful consequences that
outweigh the good intentions behind the system.”
The undergraduate student body president, Connor
Diemand-Yauman, a senior from Chesterland, Ohio, said: “I had complaints
from students who said that their professors handed back exams and told
them, ‘I wanted to give 10 of you A’s, but because of the policy, I could
only give five A’s.’ When students hear that, an alarm goes off.”
Nancy Weiss Malkiel, dean of the undergraduate
college at Princeton, said the policy was not meant to establish such grade
quotas, but to set a goal: Over time and across all academic departments, no
more than 35 percent of grades in undergraduate courses would be A-plus, A
or A-minus.
Early on, Dr. Malkiel sent 3,000 letters explaining
the change to admissions officers at graduate schools and employers across
the country, and every transcript goes out with a statement about the
policy. But recently, the university administration has been under pressure
to do more. So it created a question-and-answer booklet that it is now
sending to many of the same graduate schools and employers.
Princeton also studied the effects on admissions
rates to top medical schools and law schools, and found none. While the
number of graduates securing jobs in finance or consulting dropped to 169
last year from 249 in 2008 and 194 in 2004, the university attributed the
falloff to the recession. (Each graduating class has about 1,100 students.)
But the drop in job placements, whatever the cause,
has fueled the arguments of those opposed to the policy. The grading change
at Princeton was prompted by the creep of A’s, which accelerated in the
1990s, and the wildly divergent approaches to grading across disciplines.
Historically, students in the natural sciences were graded far more
rigorously, for example, than their classmates in the humanities, a gap that
has narrowed but that still exists.
Some students respect the tougher posture. “What
people don’t realize is that grades at different schools always have
different meanings, and people at Goldman Sachs or the Marshall Scholarship
have tons of experience assessing different G.P.A.’s,” said Jonathan
Sarnoff, a sophomore who sits on the editorial board of The Daily
Princetonian. “A Princeton G.P.A. is different from the G.P.A. at the
College of New Jersey down the road.”
Faye Deal, the associate dean for admissions and
financial aid at Stanford Law School, said she had read Princeton’s
literature on the policy and continued “to view Princeton candidates in the
same fashion — strong applicants with excellent preparation.”
Goldman Sachs, one of the most sought-after
employers, said it did not apply a rigid G.P.A. cutoff. “Princeton knows
that; everyone knows that,” said Gia Morón, a company spokeswoman,
explaining that recruiters consider six “core measurements,” including
achievement, leadership and commercial focus.
But Princetonians remain skeptical.
“There are tons of really great schools with really
smart kids applying for the same jobs,” said Jacob Loewenstein, a junior
from Lawrence, N.Y., who is majoring in German. “People intuitively take a
G.P.A. to be a representation of your academic ability and act accordingly.
The assumption that a recruiter who is screening applications is going to
treat a Princeton student differently based on a letter is naïve.”
Stuart Rojstaczer, a retired professor at Duke who
maintains a Web site dedicated to exposing grade inflation, said that
Princeton’s policy was “something that other institutions can easily
emulate, and should emulate, but will not.” For now, Princeton and its
students are still the exception. “If that means we’re out in a leadership
position and, in a sense, in a lonelier position, then we’re prepared to do
that,” Dr. Malkiel said. “We’re quite confident that what we have done is
right.”
Jensen Comment
Some of the pressure to limit the number of A grades comes from the very best
students admitted to an Ivy League university. They feel that it is no longer
possible to demonstrate that they are cream-of-the-crop graduates when 80% of
the graduating class graduates cum laude, as in the case of Harvard
University.
The very best students in graduate professional programs, such as prestigious MBA
programs, voice the same complaints if most of the students in every course
receive top grades.
Faculty no longer can be relied upon for tougher grading in virtually all
colleges and universities since, in most instances, student teaching evaluations
are now shared with administrators and promotion/tenure committees ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
As a result, grade inflation is rampant across the USA with median course grades
now in the A- to B+ range ---
http://faculty.trinity.edu/rjensen/Assess.htm#RateMyProfessor
It's a national disgrace in the USA both in higher education and K-12
education.
I was hoping that there were enough genius students applying to Princeton
such that it could hang tough in its program to limit the proportion of A grades
in undergraduate courses. Apparently this is no longer the case!
Good Deals in Becoming a K-12 Teacher:
Easy A's and Never Get Fired Even If You Don't Show Up for Work or Molest the
Children
"Do Education Programs Dole Out Too Many Easy A’s?" by Rebecca Koenig,
Chronicle of Higher Education, November 12, 2014 ---
http://chronicle.com/article/Do-Education-Programs-Dole-Out/149947/?cid=at&utm_source=at&utm_medium=en
Are teacher-training programs rigorous enough? A
new study, completed by a group that has long been critical of the quality
of teacher preparation, makes the case that they’re not.
Education students face easier coursework than
their peers in other departments, according to the study, and they’re more
likely to graduate with honors.
The report—"Easy
A’s and What’s Behind Them," which is to be released Wednesday by the
National Council on Teacher Quality—argues that a more-objective curriculum
for teaching candidates would better prepare them for careers in the
classroom.
"We’re out to improve training," said Julie
Greenberg, the report’s co-author, who is a senior policy analyst for
teacher-preparation studies for the advocacy group. "We want teacher
candidates to be more confident and competent when they get in the classroom
so their students can benefit from that."
Continued in article
"‘Easy A’s’ Gets an F," by Donald E. Heller, Chronicle of Higher
Education, November 14, 2014 ---
http://chronicle.com/article/Easy-A-s-Gets-an-F/150025/?cid=at&utm_source=at&utm_medium=en
Monsters in the Classroom: NYC Teachers Union Reinstates Alleged Molesters
---
http://www.breitbart.com/Big-Government/2012/04/14/monsters-in-the-classroom
Or when pedophiles are too dangerous for children they are sent to a
"Rubber Room" where they receive full pay every year for doing nothing ---
http://canadafreepress.com/index.php/article/31554
Rubber Room Reassignment Center Controversies (not all are pedophiles)
---
http://en.wikipedia.org/wiki/Reassignment_centers
Rubber rooms are spread across the USA and are not just in NYC
Keeping Molesters in the Classroom is Not Always the Fault of Teachers
Unions ---
http://modeducation.blogspot.com/2012/07/incompetent-administrators-not-unions.html
The fault often lies in fears of being sued and fears
of bad publicity (especially in expensive private schools)
Jensen Comment
I know of a case in Maine where a tenured high school teacher started missing
half her classes. After countless warnings she was eventually put on leave, but
she got two more years on leave at full pay before she reached retirement age.
This is one way for an older teacher to get two added years of retirement pay
and medical insurance before reaching retirement age. This would be a good
strategy for college professors except that it probably won't work without being
admitted to an early retirement program. Most colleges don't have such generous
early retirement programs.
As far as easy grades go, with colleges across the USA having median grades
of A- in most disciplines, it's hard to say that Education Departments are any
more grade inflated than other departments. However, Education Departments may
be attracting weaker students to become majors in the first place. For example,
it is usually much easier to major in math education than in mathematics at most
colleges ---
http://faculty.trinity.edu/rjensen/Assess.htm#RateMyProfessor
Grade distribution by course and section at UW-Madison (since 2004)---
http://www.metafilter.com/110462/Grade-distribution-by-course-and-section-at-UWMadison
Comments continued at
http://www.metafilter.com/110462/Grade-distribution-by-course-and-section-at-UWMadison
"To Fight Grade Inflation in the Humanities, Add Context," by Andrew
J. Perrin, Chronicle of Higher Education, July 21, 2014 ---
http://chronicle.com/article/To-Fight-Grade-Inflation-in/147793/?cid=at&utm_source=at&utm_medium=en
. . .
I am an unlikely candidate to lead grading-reform
efforts. The standard assumption is that the so-called STEM fields—science,
technology, engineering, and math—are the "hard graders,"
the humanities and most of the social sciences the
grade inflators. And my subfields—cultural sociology and social theory—are
particularly susceptible to the steady upward creep of grades
because their intellectual style is closer to the humanities than the
sciences. I suspect this pattern is due in part to the inherently subjective
nature of evaluation in humanistic fields, in part to the fact that students
don’t complain when their grades are too high, and in part to the reluctance
to exercise judgment that has characterized the humanities in recent
decades.
Whatever the causes, my experience is that grade
inflation contributes greatly to the devaluing of the humanities and some
social sciences. In fact, humanists have, if anything, more reason than our
STEM colleagues to push back against the expectation of excellent grades for
only fair performance.
Continued in article
Jensen Comment
In many colleges the Departments of Education have the biggest problems with
grade inflation. Departments of Business across the USA are mixed in terms of
grade inflation reputation. Business schools in many colleges have the luxury of
not having to beg for majors to justify offering advanced courses. It's tough to
offer an advanced curriculum for fewer than ten graduates in a discipline. In many
cases business schools have what the college considers a disproportionate number
of majors. This gives them the ability to be tougher graders than some of the
humanities departments that are starving for majors.
Also, business courses may attract some of the less talented and less
motivated students who are more deserving of low grades. I have been in large
universities where business schools attract a disproportionate number of
students who washed out of engineering programs.
Within the business school, disciplines vary in terms of student talent
and motivation. For example, it is common for accounting departments to put
higher thresholds on overall GPA requirements to major in accounting because
students learn that jobs are more plentiful in the field of accounting.
Sometimes these thresholds are quite high, in the 3.0-3.5 GPA range. In turn
this contributes to grade inflation in accounting courses since there are
fewer dummies to round out the grading distribution.
But in nearly all departments within USA colleges and universities the
biggest disgrace in higher education is grade inflation ---
http://faculty.trinity.edu/rjensen/Assess.htm#RateMyProfessor
The major reasons are teaching evaluations and the way GPA averages became keys
to the kingdom for admission to graduate schools and getting jobs.
"How to Survive Your First Years of Teaching," by Stacey Patton,
Chronicle of Higher Education's Chronicle Review, January 3, 2013 ---
https://chroniclevitae.com/news/248-how-to-survive-your-first-years-of-teaching?cid=cr&utm_source=cr&utm_medium=en
. . .
Don’t fight grade inflation. Okay, maybe just a little.
Kenneth Aslakson teaches at Union College, a
liberal-arts institution in upstate New York. He believes grade inflation is
wrong: “Students should understand that a B is a good grade and they
shouldn’t whine and cry about it.”
But when he was trying to secure tenure, he refused
to fight it.
That’s because he knew how important student
teaching evaluations were to his tenure committee. “Teaching evaluations, at
least at my school, matter and they matter a lot,” he said. “Do students
think you’re cool? Can you get along with people? These things aren’t about
how much students are learning, but they factor into how the tenure
committee evaluates you.”
If the rest of your college is giving a certain
kind of grade and you’re operating on a different scale, Aslakson said, that
can hurt you.
“When you just get out of grad school, you can be a
little out-of-touch with your expectations for your students,” he said. “I’m
not saying it is right, I’m just saying that it might not be in your best
interests to fight it.”
That’s far from a universal viewpoint, and two
panelists disagreed. Peterson, of Emory, said that she attempts to strike a
balance: She won’t hesitate to give a low grade for a lousy paper, but she
gives students a chance to rewrite.
“I give them an out from a low grade and I show
them how to learn from their mistakes and make their work stronger,” she
said. “In doing so, it changes the consumer dynamic in the classroom.”
And Maria Bollettino, of Framingham State, stuck up
for high standards. Bollettino teaches mostly first-generation students who
haven’t had opportunities to really think and write like scholars. When
those students fall short of the mark, she lets them know.
“It does students disservice to tell them that they
are awesome if they are not. If they can’t write a grammatically correct
sentence or put together a convincing argument, that’s not going to fly
later in life,” Bollettino said. “My job is to hold them to a certain
standard, to let them know if they are reaching it or not, and to prepare
them for the real world, where they are going to have to communicate well.”
Bob Jensen's threads on why grade inflation is the biggest disgrace in
higher education and why the primary cause is the role teaching evaluations play
in performance evaluations, promotion, and tenure ---
http://faculty.trinity.edu/rjensen/Assess.htm#RateMyProfessor
"When Students Rate Teachers, Standards Drop Why do colleges tie academic
careers to winning the approval of teenagers? Something is seriously amiss," by
Lyell Asher, The Wall Street Journal, October 27, 2013 ---
http://online.wsj.com/news/articles/SB10001424052702304176904579115971990673400?mod=djemEditorialPage_h
These are reasonable questions, and professors
often benefit from what their students say. Professors don't simply inspect.
They teach, and it's helpful to know how things might have gone better from
the students' point of view. The problem is that, for the vast majority of
colleges and universities, student opinion is the only means by which
administrators evaluate teaching. How demanding the course was—how hard it
pushed students to develop their minds, expand their imaginations, and
refine their understanding of complexity and beauty—is largely invisible to
the one mechanism that is supposed to measure quality.
It would be one thing if student evaluations did no
harm: then they'd be the equivalent of a thermometer on the fritz —a
nuisance, but incapable of making things worse. Evaluations do make things
worse, though, by encouraging professors to be less rigorous in grading and
less demanding in their requirements. That's because for any given course,
easing up on demands and raising grades will get you better reviews at the
end.
How much better? It's hard to say. But it isn't as
if most teachers are consciously calculating the grade-to-evaluation
exchange rate anyway. Lenient grading is always the path of least resistance
with or without student reviews: Fewer students show up in your office if
you tell them everything is OK, and essays can be graded in half the time if
you pretend they're twice as good.
There's also a natural tendency to avoid delivering
bad news if you don't have to. So the prospect of end-of-term student
reviews, which are increasingly tied to job security and salary increases,
is another current of upward pressure on professors to relax standards.
There is no downward pressure. College
administrators have little interest in solving or even acknowledging the
problem. They're focused on student retention and graduation rates, both of
which they assume might suffer if the college required more of its students.
Meanwhile, studies show that the average
undergraduate is down to 12 hours of coursework per week outside the
classroom, even as grades continue to rise. One of these studies,
"Academically Adrift" (2011) by sociologists Richard Arum and Josipa Roksa,
suggests a couple of steps that could help remedy the problem: "high
expectations for students and increased academic requirements in syllabi . .
. coupled with rigorous grading standards that encourage students to spend
more time studying."
Colleges can change this culture, in other words,
without spending a dime. The first thing they can do is adopt a version of
the Hippocratic oath: Stop doing harm. Stop encouraging low standards with
student evaluations that largely ignore academic rigor and difficulty.
Reward faculty for expecting more of students, for pushing them out of their
comfort zone and for requiring them to put academics back at the center of
college life.
Accrediting agencies could initiate this reform,
but they too would first have to stop doing harm. They would have to
acknowledge, for example, that since "learning outcomes" are calculated by
professors in the exact same way that grades are, it's a distinction without
a difference, save for the uptick in pseudo-technical jargon.
Then the accrediting agencies should insist that
colleges take concrete steps to make courses more uniformly demanding across
the board, and to decouple faculty wages and job security from student
opinion. The latter is an especially critical issue now, given the increase
in adjuncts and part-time faculty, whose job security often hangs by the
thread of student reviews.
President Obama's plan for higher education,
released in August, does not inspire confidence that this or any other issue
related to educational quality will become a central concern. On the
contrary, his emphasis on degree completion through "accelerated learning
opportunities," online courses, credit for "prior learning" and the like is
a recipe for making things worse. Pressing colleges to increase graduation
rates is every bit as shortsighted as it was to encourage banks to increase
mortgage-approval rates.
But if that's what the president wants to do, he
can rest assured that colleges and universities have an incentive structure
already in place to make it easier for students to get the degree they want,
rather than the education they need.
Mr. Asher is an associate professor of English at Lewis and Clark
College.
Jensen Comment
The biggest disgrace in education over the past five decades is grade inflation,
and in my opinion teaching evaluations are the primary cause. In the above
article Professor Asher states his opinions. For harder evidence (such as the
study at Duke) go to:
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
The easy-grading problem, in my view, is mainly caused when schools rely
mostly on required student evaluations for teaching evaluations in general. It
was much different when required student evaluations were seen only by the
instructors themselves.
I might add that the college-required evaluations are only part of the cause
of easy grading. What has become a huge factor is the Rate My Professors Website
where over a million students have sent in evaluations of their instructors. The
praises and criticisms of instructors are now available for the world to view.
Easy graders tend to get higher evaluations, although this is not always the
case. Tough graders as a rule get hammered ---
http://www.ratemyprofessors.com/
Hence even if a school reverts to the old system where only instructors see
student evaluations, some of those students will likely post their praises and
criticisms at the above link. This is especially problematic since only a small
nonrandom subset of every instructor's students send their evaluations to the
above link.
Harvard Undergraduate Grades:
Mode = A
Median = A-
In the 1940s both the mode and the median grade were C (the historic average-performance
grade).
Question 1
What is the most likely explanation of why the median and mode are unequal?
Hint:
Think variance and kurtosis when the A grade is also an upper bound.
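The hint can be made concrete with a tiny sketch (the counts below are invented
for illustration, not Harvard's actual distribution): when A is a hard upper
bound, grade inflation piles probability mass against the ceiling, so A becomes
the single most frequent grade (the mode) while the 50th-percentile grade (the
median) can remain one notch lower at A-.

```python
# Hypothetical letter-grade counts for a class of 100 (illustrative only).
# Inflation stacks mass against the A ceiling: A is the single most common
# grade, but it holds less than half the total, so the middle student in
# rank order still falls in the A- band.
counts = {"A": 40, "A-": 25, "B+": 15, "B": 10, "B-": 6, "C+": 4}

# Expand to one entry per student, ordered from highest grade down.
order = ["A", "A-", "B+", "B", "B-", "C+"]
grades = [g for g in order for _ in range(counts[g])]

mode = max(counts, key=counts.get)   # most frequent grade
mid = grades[len(grades) // 2]       # grade of the middle student

print(mode, mid)  # prints: A A-
```

With 40 of 100 grades at A, no other grade has more mass, yet the cumulative
count does not pass 50 until A-; that skewed, low-variance pile-up under the
ceiling is exactly what the hint's variance-and-kurtosis language points to.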
Question 2
Nearly 70 undergraduates at Harvard were recently expelled for cheating in a
government course that gives every student an A grade for completing the course.
This raises the question of why so many students cheated when they were assured of
getting a top grade without having to cheat.
Answer to Question 2
The investigation revealed that most of the cheaters were just too lazy to do
the writing assignment even though everybody who submitted a paper would get
the same top grade. The students who were expelled all plagiarized the same
parts of the paper, which, when you think about it, made the detection of
plagiarism inevitable if a grader actually read each paper.
"Most Frequently Awarded Grade at Harvard: A," Inside Higher Ed,
December 4, 2013 ---
http://www.insidehighered.com/quicktakes/2013/12/04/most-frequently-awarded-grade-harvard
The most frequently awarded grade for
undergraduates at Harvard University is an A, and the median grade is A-.
University officials released those facts Tuesday at a meeting of arts and
sciences faculty members, and a Harvard spokesman confirmed the information
Tuesday night. The spokesman cautioned in an email against too much emphasis
on the grade data. "We believe that learning is the most important thing
that happens in our classrooms and throughout our system of residential
education. The faculty are focused on creating positive and lasting learning
outcomes for our undergraduates," he said. "We watch and review trends in
grading across Harvard College, but we are most interested in helping our
students learn and learn well."
Some Harvard faculty members are concerned,
however, about grade inflation. Harvey Mansfield, who has repeatedly raised
the issue, was the one who brought it up with questions at Tuesday's
meeting. He told The Boston Globe that he thought grading patterns were
"really indefensible."
Jensen Comment
The number I recall the most is that over 80% of Harvard's graduates graduate
cum laude.
Who does this hurt the most?
It probably hurts the top 10% of Harvard graduates, who are not designated as
performing better than the rest of the cum laude graduates. If 1,000
cum laude graduates apply to medical school, recruiters essentially have
to ignore Harvard grade averages in favor of other criteria like MCAT scores.
You've got to love the curmudgeon political science professor at Harvard who
assigns a transcript grade (almost always an A grade) and a private grade, seen
only by each student, showing what Professor Mansfield thinks the student
actually earned were there not such an epidemic of grade inflation ---
http://en.wikipedia.org/wiki/Harvey_Mansfield
His somewhat weak excuse is that students who take his sections of a course
should not be penalized relative to their alternatives for earning A grades in
other sections. But he does not want most of them leaving his courses thinking
that they were nearly perfect.
Unlike Harvard, Princeton University has been
making a more concerted effort to lower the mode and median grades in most
courses.
Bob Jensen's threads on the major cause of
grade inflation across the USA (different colleges and universities compared)
---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
F**k Up That Professor Mansfield!
Larry Summers was president of Harvard University before
he became chief economic adviser to President Obama
"White House economist: 'F--- up' conservative prof 'I was astounded that the
president of Harvard would stoop to such tactics'," WorldNetDaily,
December 6, 2009 ---
http://www.wnd.com/index.php?fa=PAGE.view&pageId=118187
According to a university
colleague, former president of Harvard and current White House economist
Larry Summers once asked for help to "f--- up" one of the school's
conservative professors.
Summers' colleague, Cornel
West, is a radical race relations instructor who is now a professor at
Princeton after departing Harvard in the wake of a dispute with Summers.
Obama named West, whom he has called a personal friend, to the Black
Advisory Council of his presidential campaign. West was a key point man
between Obama's campaign and the black community.
In his recently released
memoirs, "Brother West: Living and Loving Out Loud," West claims that
Summers invited West into his office and asked him to help undermine Harvard
government professor Harvey Mansfield, who had professed conservative views.
"Help me f--- him up,"
Summers reportedly said to West without explaining further.
West writes, "For my part, I
was astounded that the President of Harvard would stoop to such tactics."
West further related the details of the alleged
encounter in a recent interview with Amy Goodman, host of the far-left
Democracy Now
Internet
television network.
Said West: "And as soon as I
walked into the office, [Summers] starts using profanity about Harvey
Mansfield. I said, 'No, Harvey Mansfield is conservative, sometimes
reactionary, but he's my dear brother.' We had just had debates at Harvard.
Twelve hundred people showed up. He was against affirmative action; I was
for it. That was fine. Harvey Mansfield and I go off and have a drink after,
because we have a respect, but deep, deep philosophical and ideological
disagreement. He was using profanity, so I had to defend Harvey Mansfield."
"Wait, so you're saying
Lawrence Summers was using profanity?" Goodman asked.
Continued West: "Larry
Summers using profanity about, you know, 'help me 'F' so and so up.' No, I
don't function like that. Maybe he thought that just as a black man, I like
to use profanity. I'm not a puritan. I don't use it myself. I have partners
who do."
In response to West's
claimed meeting with Summers, Mansfield told WND, "Larry Summers was not out
to get me."
"I was not present at the
famous interview between him and Cornel West, but in my opinion (Summers)
merely used my name in a clumsy attempt to cajole Cornel West into behaving
more like a professor, less like a celebrity," said Mansfield.
"Larry Summers was doing
many good things at Harvard before his enemies there succeeded in ousting
him," Mansfield added.
Neither Summers nor West immediately returned WND
e-mail and
phone requests for comment.
Mansfield is well-known
for his opposition to grade inflation at Harvard,
which he has publicly blamed in part on affirmative action. His views led to
student protests and a well-attended debate with West.
Mansfield also defended President Bush's use of
executive powers and has been
criticized by some leading feminists for his views on gender roles. He has
made statements that men and women have some different societal roles and
wrote a book, "Manliness," in which he bemoaned the loss of the virtue of
"manliness" in a "gender neutral" society.
Summers, meanwhile,
continues to teach at Harvard but lost his position as president in part
after a public feud in which West accused him of racism. Summers serves as
director of the White House's National Economic Council.
West served as an adviser on
Louis Farrakhan's Million Man March and is a personal friend of Farrakhan.
He authored two books on race with Henry Louis Gates Jr., who last summer
was at the center of controversy after Obama remarked on the Harvard
professor's arrest.
Continued in article
"Sociology and Other 'Meathead' Majors: Archie Bunker was right to
be skeptical of his son-in-law's opinions," by Harvey Mansfield,
The Wall Street Journal, May 31, 2011 ---
http://online.wsj.com/article/SB10001424052702304520804576345632061434312.html?_nocache=1306940719500#&mg=com-wsj
Harvard professor Harvey Mansfield once famously
advised a conservative colleague to wait until he had tenure and only then to
"hoist the Jolly Roger." But few professors are getting around to hoisting the
Jolly Roger at all. Either they don't have a viewpoint that is different from
their colleagues, or they've decided that if they are going to remain at one
place for several decades, they'd rather just get along. Is tenure to blame for
the unanimity of thinking in American universities? It's hard to tell. But
shouldn't the burden of proof be on the people who want jobs for life?
Naomi Schaefer Riley, "Tenure and Academic Freedom: College campuses display a
striking uniformity of thought," The Wall Street Journal, June 23, 2009
---
http://online.wsj.com/article/SB124571593663539265.html#mod=djemEditorialPage
Some are the most prestigious
universities in the USA
"13 Schools Where It's Almost Impossible To Fail," by Max Rosenberg and
Lynne Guey, Business Insider, May 29, 2013 ---
http://www.businessinsider.com/13-schools-where-its-really-hard-to-fail-2013-5
UC Berkeley Business School's Effort to Hold Back the Tide of Grade Inflation
Appears to Have Failed
"Higher Grades for Haas Undergrads," by Louis Lavelle, Bloomberg
Businessweek, May 13, 2013
http://www.businessweek.com/articles/2013-05-13/higher-grades-for-haas-undergrads
Two years after instituting grading caps for
undergraduate business students, the
Haas School of Business at the University of
California, Berkeley is relaxing its unpopular policy, making it possible
for students to earn higher grades.
In 2011, the school capped the mean GPA at 3.2 for
core classes and 3.4 for electives. Effective May 3, the
caps
have been raised to 3.4 for core classes and 3.6
for electives, according to the Daily Californian, the UC-Berkeley
student newspaper.
Haas says the
new cap for core classes “more closely reflects
the historical mean.” The goal of the new caps is to “establish clear and
consistent academic standards” across degree programs and multiple sections
of the same course, and “to encourage students to come to class, and to come
to class prepared.”
After Haas
scrapped its grading curve in 2011, the caps put
in place were not popular with students. Tyler Wishnoff, president of the
Haas Business School Association, said those caps left many students feeling
that it was too difficult to get the grades they thought they deserved and
may put them at a disadvantage when competing for jobs with graduates of
schools without such a policy. Some students felt there was little point in
trying hard for mediocre grades.
“There was definitely a lot of mixed feelings about
the caps,” Wishnoff says. “There was a perception that it was just too hard
to do well. … I definitely talked to students who stopped trying because the
policy was too oppressive.”
The new policy, Wishnoff says, is a big
improvement, giving faculty the flexibility they need to award grades that
accurately reflect a student’s performance. The new policy—while it won’t be
retroactive, as some students had wanted—is fair and maintains the school’s
academic rigor, he says.
Jensen Comment
The biggest disgrace, in my opinion, in higher education has been grade
inflation, where median grades have crawled upward, with the cause, again in my
opinion, being the changed role of student evaluations in virtually every
college's faculty decisions regarding tenure, promotion, and pay ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
Possible Texas Law: Include class average grade alongside each
student's transcript grade
A grade of A no longer looks so good if the average grade for the class was a
grade of A
A grade of B is shown to be below average
"Higher Education Revalued," by Thomas K. Lindsay, Education News,
April 16, 2013 ---
http://educationviews.org/higher-education-revalued/
Thank you Chuck Pier for the heads up.
Grade inflation is real, rampant, and ravaging a
university near you. It would be a scandal if more people knew about it.
A bill filed in March in the Texas legislature
looks to ensure that more do. Called “Honest Transcript,” it is a model of
brevity, at only a little more than 300 words. Yet its sponsors expect it to
shake up higher education in the state and beyond. They believe that when
the public gets wind of higher education’s widespread grade-inflating
practices, it will put a stop to them. Others, less hopeful, think that
public transparency will merely reveal public indifference.
The bill would require all public colleges and
universities to include on student transcripts, alongside the individual
student’s grade, the average grade for the entire class. This would help
potential employers determine whether a high grade-point average signified
talent and achievement or merely revealed that the student had taken easy
courses.
The Honest Transcript bill was introduced in the
Texas house by Republican Scott Turner, a freshman representative and former
NFL cornerback (Redskins, Chargers, Broncos), and in the state senate by
veteran Republican Dan Patrick. Supporters argue that its modest
transparency requirement would show how grade inflation has severely
degraded the significance of college degrees.
A half-century of grade inflation has been
demonstrated repeatedly by national studies. Today, an A is the most common
grade given in college — 43 percent of all grades, as opposed to 15 percent
in the 1960s, according to Stuart Rojstaczer, formerly of Duke, and
Christopher Healy, of Furman, who conducted a 50-year survey of grading.
Arthur Levine, president of the Woodrow Wilson National Fellowship
Foundation, has also studied the trajectory of college grades. He finds that
in 1969, 7 percent of two- and four-year college students said their GPA was
an A-minus or higher; by 2009, 41 percent of students did. Having been
either a college student, a professor, or an administrator for nearly 30
years, I am not surprised by such findings. Nor, I suspect, is anyone else
in the academy. And neither are employers. People who make hiring decisions
here in Texas complain to me that grade inflation makes it virtually
impossible to rank job applicants accurately, because nearly all have A or B
averages.
It gets worse. A 2011 national study published as
the book Academically Adrift, by Richard Arum and Josipa Roksa, found that
our puffed-up prodigies are learning much too little. Thirty-six percent of
the students it surveyed show little or no increase in their ability for
critical thinking, complex reasoning, and clear writing after four years of
college. Small wonder that employers are frustrated, with the annual parade
of impressive transcripts hiding empty heads.
Employer concerns notwithstanding, universities
have a higher calling than simply preparing future workers. Almost all of
them proclaim in their mission statements that they seek to enhance their
students’ capacity for independent thought. In undermining this, their
noblest calling (which harkens back to Socrates’ declaration that “the
unexamined life is not worth living”), grade inflation is especially
harmful: It eats away at the essence and morale of an academic institution.
For Rojstaczer and Healy, “when college students perceive that the average
grade in a class will be an A, they do not try to excel. It is likely that
the decline in student study hours, student engagement, and literacy are
partly the result of diminished academic expectations.”
This, then, is the academic reality whose veil the
bill would lift: Too many students are learning too little, yet their grades
have never been so high.
Will Texas universities oppose transcript
transparency? It’s hard to imagine a principled basis for resistance, since
universities are defined by the pursuit of knowledge and its dissemination
to students and the larger society. Nevertheless, one university has
complained to Representative Turner that the bill would create “processing
difficulties in the Registrar’s office.”
This objection comes too late, for such
“processing” is now the norm. Recently, through services such as MyEdu.com
and internal school websites, students have been able to sift through the
grading histories of professors. MyEdu proclaims that it “works directly
with universities to post their official grade records, including average
GPA and drop rates. Yes, really — these are the official grade records
straight from your university.” It boasts a membership of over 800 schools
and more than 5 million students. Its reach in Texas extends to nearly every
public college and university.
Continued in article
Jensen Comment
I would prefer that the "average grade" be computed as the median grade since a
few low grades could skew the mean downward.
Especially note the grade inflation
graphs at
www.Gradeinflation.com
Valen E. Johnson, a biostatistics professor at the
University of Michigan and author of "Grade Inflation: A Crisis in College
Education" (Springer Verlag), said the use of student ratings to evaluate
teachers also inflates grades: "As long as our evaluations depend on their
opinion of us, their grades are going to be high."
Links to several formal studies of the impact of teaching evaluations on grade
inflation ---
The investigation revealed that 91 percent of
Harvard's students graduated cum laude.
Thomas Bartlett and Paula Wasley, "Just Say 'A': Grade Inflation Undergoes Reality Check:
The notion of a decline in standards draws crusaders and skeptics," Chronicle of Higher Education, September 5,
2008 ---
http://chronicle.com/weekly/v55/i02/02a00104.htm?utm_source=wb&utm_medium=en
It is also commonly said that grade inflation is by
far the worst in Ivy League schools. This isn't exactly correct, either. I
discuss this issue at length in our recently finished research paper on college
grading in America. It's beyond the scope of this web post to examine this issue
except to note that while grades are rising for all schools, the average GPA of
a school has been strongly dependent on its selectivity since the 1980s. Highly
selective schools had an average GPA of 3.43 if they were private and 3.22 if
they were public as of 2006. Schools with average selectivity had a GPA of 3.11
if they were private and 2.98 if they were public.
Stuart Rojstaczer, GradeInflation.com ---
www.Gradeinflation.com
College students are not as intelligent
Whereas college grades are being inflated, the intelligence of students in
college is being deflated by rising numbers of college admissions. A much larger
fraction of the population attends college now, with a resulting decline in
average cognitive ability.
"College students are not as intelligent" ---
http://scienceblogs.com/gnxp/2009/09/college_students_are_not_as_in.php
Professors read student comments on
RateMyProfessors.com and now it's their turn to strike back on video
Watch their rebuttals on video ---
http://video.ratemyprofessors.com/
Note that some of these videos are chopped up into
segments, so don't assume the video is over until it's over.
It appears to me that the instructors who are willing to post video rebuttals
are probably more self-assured and probably receive higher ratings from students
than many of the lower-rated professors who do not strike back. Keep in mind
that both student evaluations and instructor rebuttals at this site are
self-selecting, and often the students who supply evaluations in a given course
are only a small proportion of the students in the course. Outliers well above
and below the mean of satisfaction tend to be the respondents for a given
professor.
Some of the links below may now be broken.
RateMyProfessor now claims to have archived evaluations of over 1 million
professors from 6,000 schools based on over 6 million submitted evaluations from
students.
The proportions of students who submitted evaluations are self-selecting and
minuscule compared to the number of students taught by each professor. Also, the
outliers tend to respond more than the silent majority. For example, sometimes
the overall evaluations are based on only 1-10 self-selecting (often
disgruntled) students among possibly hundreds taught over the years by an
instructor.
The controversial RateMyProfessor site now links to Facebook entries for
professors
Our new Facebook app lets you search for, browse and read
ratings of professors and schools. Find out
which professor will inspire you, challenge
you, or which
will just give you the easy A.
RateMyProfessor ---
http://www.ratemyprofessors.com/index.jsp
Probably the most widespread scandal in higher education is grade
inflation. Much of this can be attributed to required (by the university) and
voluntary (RateMyProfessor) evaluations of instructors by students ---
http://faculty.trinity.edu/rjensen/Assess.htm#GradeInflation
Teaching evaluations are dysfunctional because they are a major cause of
the disgrace in education --- monumental grade inflation across the USA ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
"The Unnecessary Agony of Student Evaluations," by Spurgeon Thompson,
Chronicle of Higher Education, March 1, 2013 ---
http://chronicle.com/blogs/conversation/2013/03/01/the-unnecessary-agony-of-student-evaluations/?cid=wb&utm_source=wb&utm_medium=en
Student evaluations can be either the most painful
or falsely ego-boosting things we faculty members read. Sadly, they’re
becoming more and more important as American universities veer toward
private-enterprise models of educational management. Based on the concept of
the customer survey, they have been taken public by a range of Web sites,
most famously Rate My Professors.
Now that I’ve returned from a decade teaching in
Europe, where the culture around student evaluations is entirely different,
it has been eye-opening, if not alarming, to witness American higher
education’s shifts toward consumerist assumptions. The impulse behind this
shift is understandable. We’ve all done it—written a negative review of a
product we were unhappy with on Amazon, or complained about a bad experience
with an airline that lost our bag or a hotel whose bedsheets weren’t
changed.
There’s a certain liberating power that comes with
such ratings, a sense of “I’ll get them for what they did to me” or “I’ll
reward them for that extra effort they made.” The problem is when we mistake
our money for power, as if buying a service gives us control over its
manufacture or production. It doesn’t. “Consumer power” is a myth invented
to get us to buy more.
But university students aren’t strictly consumers
purchasing a product. To understand why not, try this thought experiment. If
the Apple Store made us apply to buy a new iPhone, and accepted only, say,
30 percent of us who wanted to buy one, and then told us we had to study and
master the phone’s operation manual for several years before we could
actually hold it in our hands—and even then only three-quarters of us would
actually get a phone—would we still regard ourselves as customers after all
those years? We would be something else. We would resemble more those
“pre-employees” we hear about who have to pay for their own training than we
would customers just buying things.
But there are several more basic reasons why
students are not customers. First, most of them have been forced by law to
attend school for 12 years before they arrive in a college classroom. If
they went to public schools, they did not buy that schooling. In any case,
they had no power over whether they went to school or not (even
home-schooling is regulated). And when they enter universities, students are
so conditioned by the feelings involved in being forcibly educated that they
can hardly be said to feel free. (I don’t mean about which university they
are in; I mean about being in a university at all.)
To say that an American university student has
freely chosen to be educated is a bit like saying they have freely chosen to
buy food to eat.
Further, they are graded. Customers are not. With
rampant grade inflation in this country, effectively students are told
whether they are suitable or not (given an A or a B). Over and over, they
are told whether they are good enough to continue being told whether they
are good enough. Even the most rigorous professions don’t require the kind
of extensive, multifaceted performance-review structure that five graded
courses a semester constitutes.
Now, to be asked to evaluate the performance of the
person evaluating yours—that is psychologically complex. In the business
world, my friends tell me that this is called the “360-degree performance
review,” where bosses evaluate employees, and employees, in turn, evaluate
bosses. But eventually, of course, you run out of bosses, and the “circle”
closes. In universities, it doesn’t work that way. Students come and go, and
professors generally remain.
In Europe generally, where universities are mostly
free (though increasingly less so) and very difficult to get into, students
are regarded not as consumers but as subjects needing either training or
enlightenment. The life of the mind is valued and nurtured, or,
alternatively, technical skills are passed on, depending on what kind of
institution you go to. Grades are nowhere near as inflated, and student
evaluations are regarded as formalities, like a form filled out for
bureaucrats. Value is placed not on how students regard their professors but
how professors regard their colleagues.
Teachers should evaluate the teaching skills of
other teachers, regularly, as part of life, as part of what we do in our
classrooms. Leaving it to amateurs doesn’t make sense. Leaving it to
students is almost absurd.
I liked European attitudes toward student
evaluations. But I wouldn’t want to live with them. They were dismayingly
unhelpful. Still, students are not customers, and professors are not service
providers. American universities use the myth of consumer power to sell
themselves. Few professors are fired because of student evaluations—except
those who are most vulnerable, that is, adjuncts at the very lowest rungs of
the academic industry.
But all of us internalize the responses we get;
we’re told to be tough inside when they are negative. We somehow believe
them, as if they are truths objectively obtained. Students once ourselves,
we hunger for grades and approval. Regardless of how many times our
colleagues tell us not to worry over the bad evaluations, and not to let the
good ones go to our heads, we are still very much students inside, seeking
grades.
Continued in article
A Debate by Experts About Teaching Evaluations
"Professors and the Students Who Grade Them," The New York Times,
September 17, 2012 ---
http://www.nytimes.com/roomfordebate/2012/09/17/professors-and-the-students-who-grade-them?hp
Jensen Comment
One of the experts is a man after my own heart:
Stuart Rojstaczer, a former professor of geology and civil engineering at
Duke University, is the creator of the
Grade Inflation Web site. He is writing a book about undergraduate education
in the U.S.
Grade inflation is, in my opinion, the Number One disgrace in higher
education, and the major cause of grade inflation is the teaching evaluation
process where students impact the promotion, tenure, and salary outcomes of
their teachers.
The Demise of the Top Military Academies in the USA
"The Few, the Proud, the Infantilized," by Bruce Fleming, Chronicle of
Higher Education, October 6, 2012 ---
http://chronicle.com/article/The-Few-the-Proud-the/134830/
The U.S. military-service academies—at West Point
(Army), Annapolis (Navy), Colorado Springs (Air Force), and New London
(Coast Guard)—are at the center of several debates, both military and
civilian. The military is downsizing, and the federal budget is under
scrutiny: Do the academies deserve to continue?
They're educational institutions, but do they
actually educate, and furthermore, do they produce "leaders" as they claim
to? And are they worth the $400,000 or so per graduate (depending on the
academy) they cost taxpayers?
After all, we already have a federal program that
produces officers—an average of twice as many as those who go to the
academies (three times for the Army)—at a quarter of the cost. That program
is ROTC, the Reserve Officer Training Corps, which has expanded considerably
since World War II, when the academies produced the lion's share of
officers.
No data suggest that ROTC officers are of worse
quality than those graduating from the academies, who are frequently
perceived by enlisted military as arrogant "ring-knockers" (after their
massive old-style class rings). The academies evoke their glory days by
insisting that many more admirals, say, come from Annapolis than from ROTC.
But that is no longer true. Between 1972 and 1990 (these are the latest
figures available), the percentage of admirals from ROTC climbed from 5
percent to 41 percent, and a 2006 study indicated that commissioning sources
were not heavily weighted in deciding who makes admiral.
Another officer-production pipeline is Officer
Candidate School, which is about as large a source of officers as the
academies. It gives a six- to 12-week training course for mature enlistees
and college graduates who paid for their educations on their own (that is,
did not participate in ROTC), and it costs taxpayers almost nothing. It
could be expanded by pitching it to college students who might want to
become officers when they graduate.
So the service academies are no longer
indispensable for producing officers. Their graduates now make up only about
20 percent of the officer corps in any given year. It's clear that we don't
need the academies in their current form—versions of a kind of military
Disneyland. These institutions do produce some fine officers, even some
leaders. But the students I respect the most tell me that those who succeed
do so despite the institutions, not because of them.
The best midshipmen—and, as I know through
conversations and written correspondence, the best students at the other
service academies—are deeply angry, disillusioned, and frustrated. They
thought the academies would be a combination of an Ivy League university and
a commando school. They typically find that they are neither.
Most of what the Naval Academy's PR machine
disseminates is nonsense, as midshipmen quickly realize, which diminishes
their respect for authority. We announce that they're the "best and
brightest" and then recruit students who would be rejected from even average
colleges, sending them, at taxpayer expense, to our one-year Naval Academy
Preparatory School. (About a quarter of recent entering classes over the last
decade or so have SAT scores below 600, some in the 400s and even 300s.
Twenty percent of the class needs a remedial pre-college year.)
The academies do have a handful of honors programs,
and their engineering programs are nationally ranked. But for the most part,
academics are lackluster despite an intense focus on grades. Although free
time is granted or withheld based on GPA, an atmosphere exists in which
studying isn't "cool," and freshmen, or plebes, aren't allowed to take the
afternoon naps that would allow them to stay awake in class. (Sleep deprivation
is used to "teach" students how to stay awake on the job—except there is no
evidence that working while sleep-deprived is something you can get better
at.)
The academies' focus on physicality is largely lip
service as well. We claim to promote fitness but then refuse to throw out
students who repeatedly fail to pass physical tests. Gone are the days of
"shape up or ship out": Nowadays we "remediate."
We also claim that students are "held to a higher
moral standard," which suggests zero or low tolerance of wrongdoing. But the
current emphasis on reducing attrition means that, as many midshipmen have
told me, students get one "freebie," such as a DUI. Held to a higher moral
standard? The students know that's a joke.
What else justifies our existence? Our most
consistent justification is that we teach "leadership." We even make
students take classes in the subject. Midshipmen roll their eyes. Leadership
can't be taught, it can only be modeled.
The central paradox of the service academies is
that we attract hard-charging "alpha" types and then make all their
decisions for them. Students are told when to study and when to work out,
whom they can date (nobody in their company), and when they can wear
civilian clothes. All students must attend football games and cheer, and go
to evening lectures and cultural events (where many sleep in their seats).
The list goes on.
The academies are the ultimate nanny state. "When
are they going to let me make some decisions?" one student asked in
frustration. "The day I graduate?" This infantilization turns students
passive-aggressive, and many of them count the years, months, and days until
they can leave.
Decades of talking with students at the Naval
Academy have convinced me that most dislike academic work because it is one
more thing the students have to do. Why should they be interested? They're
not paying for it. And Daddy isn't either, at least not more than any other
taxpayer.
The military side of things suffers, too.
Inspections are announced and called off at the last minute, or done
sloppily. After all, everything is make-believe. Students aren't motivated
to take care of their own uniforms or abide by the rules because they
realize it's all just for show. Administrators want to make sure nobody gets
hurt to avoid negative publicity, and as a result students are not pushed to
their limits. They resent it. They come expecting Parris Island, but they
get national historic landmarks where tourists come to feel proud of
nice-looking young people.
Is there anything good about the academies?
Absolutely: the students, by and large. You won't find a more focused,
eager-for-a-challenge, desperate-to-make-a-difference group of young adults
(whom we proceed to infantilize) anywhere. Some catch on quickly about the
hype and don't let it bother them. They pragmatically view the academy as a
taxpayer-supported means to an end they desperately want. And we have some
bright students: About a quarter of entering freshmen have SAT scores above
700 with grades to match (but that is a far smaller proportion of high
scorers than at the Ivies).
A handful are high performers. One of my students
last year was a varsity swimmer, an English honors graduate in the top 5
percent of his class, and the "honor man" (single best performer) in his
SEAL class at the famously brutal Basic Underwater Demolition training. That
is gorgeous stuff, the ultimate combination of brains and brawn the
academies say they produce. But how rare at Annapolis!—or indeed, anywhere.
Another of my students, a systems-engineering
major, was in the top 1 percent of his class and is now doing graduate work
at the University of Oxford. He also won, as a sophomore, a competition
sponsored by Harvard's Kennedy School for his essay on how to filter out
arsenic from Ganges Delta water by running it through fern leaves. At the
reception given after his lecture, he was too young to drink the chardonnay.
The following weekend he returned to Boston to run the Boston Marathon with
the Naval Academy team. It's true, America: The service academies really can
enroll outstanding students. But such students are the exception.
Whose fault is this generally disappointing state
of affairs? Partly it's the gravitational pull of history. The service
academies are relics of the 19th century. (Exception: The Air Force was
split off from the Army after World War II and got its stand-alone academy
as a postscript in 1954.) At the time, they clearly represented progress.
War had become more technical, and soldiers-in-training needed a technical
education that colleges still largely devoted to Greek, Latin, and religion
were unequipped to provide.
But the world has changed. Now most reputable
colleges offer technical courses, and top-tier colleges and universities
already produce many of our officers and leaders. At the same time, the
academies have become more like civilian colleges, albeit rather strange
ones. We now give a bachelor of science (to all majors, including English
and history) rather than a certificate for a standard course of study as we
initially did. Students walk to class rather than march; women were accepted
starting in 1976; going to chapel is no longer mandatory. And now, of
course, we enroll openly gay students.
Continued in article
Jensen Comment
Bruce Fleming is a professor of English at the U.S. Naval Academy. He is the
author, most recently, of Bridging the Military-Civilian Divide (Potomac
Books, 2010). I wonder how much of his criticism of the military academies
extends to virtually all colleges and universities in the USA. My guess is that
in that context the military academy demise is not so unique.
You can read about what some of Bruce Fleming's Naval Academy students say
about him on RateMyProfessor.com. Please note that in general over one million
RMP submissions about their college professors are not random samples. I totally
disregard the numerical ratings of any professor, but I do find some of the
subjective comments somewhat revealing. Unlike so many college professors these
days, Professor Fleming appears to be a hard grader ---
http://www.ratemyprofessors.com/ShowRatings.jsp?tid=395876
The Number 1 disgrace, apart from increasing tolerance of felonies like rape,
is grade inflation ---
http://faculty.trinity.edu/rjensen/Assess.htm#RateMyProfessor
"11 Former Atlanta Educators Convicted in Cheating
Scandal," by Kate Brumback, Time Magazine, April 1, 2015 ---
http://time.com/3767734/atlanta-cheating-scandal/?xid=newsletter-brief
In one of the biggest cheating scandals of its kind
in the U.S., 11 former Atlanta public school educators were convicted
Wednesday of racketeering for their role in a scheme to inflate students’
scores on standardized exams.
The defendants, including teachers, a principal and
other administrators, were accused of falsifying test results to collect
bonuses or keep their jobs in the 50,000-student Atlanta school system. A
12th defendant, a teacher, was acquitted of all charges by the jury.
The racketeering charges carry up to 20 years in
prison. Most of the defendants will be sentenced April 8.
“This is a huge story and absolutely the biggest
development in American education law since forever,” said University of
Georgia law professor Ron Carlson. “It has to send a message to educators
here and broadly across the nation. Playing with student test scores is
very, very dangerous business.”
A state investigation found that as far back as
2005, educators fed answers to students or erased and changed answers on
tests after they were turned in. Evidence of cheating was found in 44
schools with nearly 180 educators involved, and teachers who tried to report
it were threatened with retaliation.
Similar cheating scandals have erupted in
Philadelphia, Washington, D.C., Nevada and other public school systems
around the country in recent years, as officials link scores to school
funding and staff bonuses and vow to close schools that perform poorly.
Thirty-five Atlanta educators in all were indicted
in 2013 on charges including racketeering, making false statements and
theft. Many pleaded guilty, and some testified at the trial.
Former Atlanta School Superintendent Beverly Hall
was among those charged but never went to trial, arguing she was too sick.
She died a month ago of breast cancer.
Hall insisted she was innocent. But educators said
she was among higher-ups pressuring them to inflate students’ scores to show
gains in achievement and meet federal benchmarks that would unlock extra
funding.
Over objections from the defendants’ attorneys,
Superior Court Judge Jerry Baxter ordered all but one of those convicted
immediately jailed while they await sentencing. They were led out of court
in handcuffs.
“They are convicted felons as far as I’m
concerned,” Baxter said, later adding, “They have made their bed and they’re
going to have to lie in it.”
The only one allowed to remain free on bail was
teacher Shani Robinson, because she is expected to give birth soon.
Bob Rubin, the attorney for former elementary
school principal Dana Evans, said he was shocked by the judge’s decision and
called it “unnecessary and vindictive.”
Prosecutors said the 12 on trial were looking out
for themselves rather than the children’s education. Defense attorneys
accused prosecutors of overreaching in charging the educators under
racketeering laws usually employed against organized crime.
"Dishonest Educators," by Walter E. Williams, Townhall, January
9, 2013 ---
Click Here
http://townhall.com/columnists/walterewilliams/2013/01/09/dishonest-educators-n1482294?utm_source=thdaily&utm_medium=email&utm_campaign=nl
Nearly two years ago, U.S. News & World Report came
out with a story titled "Educators Implicated in Atlanta Cheating Scandal."
It reported that "for 10 years, hundreds of Atlanta public school teachers
and principals changed answers on state tests in one of the largest cheating
scandals in U.S. history." More than three-quarters of the 56 Atlanta
schools investigated had cheated on the National Assessment of Educational
Progress test, sometimes called the national report card. Cheating orders
came from school administrators and included brazen acts such as teachers
reading answers aloud during the test and erasing incorrect answers. One
teacher told a colleague, "I had to give your kids, or your students, the
answers because they're dumb as hell." Atlanta's not alone. There have been
investigations, reports and charges of teacher-assisted cheating in other
cities, such as Philadelphia, Houston, New York, Detroit, Baltimore, Los
Angeles and Washington. Recently, The Atlanta
Journal-Constitution's blog carried a story titled "A new cheating scandal:
Aspiring teachers hiring ringers." According to the story, for at least 15
years, teachers in Arkansas, Mississippi and Tennessee paid Clarence
Mumford, who's now under indictment, between $1,500 and $3,000 to send
someone else to take their Praxis exam, which is used for K-12 teacher
certification in 40 states. Sandra Stotsky, an education professor at the
University of Arkansas, said, "(Praxis I) is an easy test for anyone who has
completed high school but has nothing to do with college-level ability or
scores." She added, "The test is far too undemanding for a prospective
teacher. ... The fact that these people hired somebody to take an easy test
of their skills suggests that these prospective teachers were probably so
academically weak it is questionable whether they would have been suitable
teachers."
Here's a practice Praxis I math question: Which of
the following is equal to a quarter-million -- 40,000, 250,000, 2,500,000,
1/4,000,000 or 4/1,000,000? The test taker is asked to click on the correct
answer. A practice writing skills question is to identify the error in the
following sentence: "The club members agreed that each would contribute ten
days of voluntary work annually each year at the local hospital." The test
taker is supposed to point out that "annually each year" is redundant.
CNN broke this cheating story last July, but the
story hasn't gotten much national press since then. In an article for
NewsBusters, titled "Months-Old, Three-State Teacher Certification Test
Cheating Scandal Gets Major AP Story -- on a Slow News Weekend" (11/25/12),
Tom Blumer quotes speculation by the blog "educationrealist": "I will be
extremely surprised if it does not turn out that most if not all of the
teachers who bought themselves a test grade are black. (I am also betting
that the actual testers are white, but am not as certain. It just seems that
if black people were taking the test and guaranteeing passage, the fees
would be higher.)"
There's some basis in fact for the speculation that
it's mostly black teachers buying grades, and that includes former Steelers
wide receiver Cedrick Wilson, who's been indicted for fraud. According to a
study titled "Differences in Passing Rates on Praxis I Tests by
Race/Ethnicity Group" (March 2011), the percentages of blacks who passed the
Praxis I reading, writing and mathematics tests on their first try were 41,
44 and 37, respectively. For white test takers, the respective percentages
were 82, 80 and 78.
Continued in article
Jensen Commentary
It should be noted that the author of this article, Walter Williams, is an
African American economics professor at George Mason University. He's also
conservative, which is rare for an African American who grew up in an urban
ghetto. This makes him an endangered species in academe.
The Atlanta superintendent who led the cheating died two months ago
from breast cancer.
The cheating hurt thousands of students by denying them access to remedial
education while the cheating teachers and administrators got bigger bonuses.
Hundreds of other cheating teachers blamed administrators and plea bargained to
stay out of jail and keep their jobs.
"Why Students Gripe About Grades," by Cathy Davidson, Inside Higher
Ed, January 7, 2013 ---
http://www.insidehighered.com/views/2013/01/07/essay-how-end-student-complaints-grades
The biggest scandal in education is nearly universal grade inflation ---
http://faculty.trinity.edu/rjensen/Assess.htm#RateMyProfessor
Bob Jensen's Fraud Updates ---
http://faculty.trinity.edu/rjensen/FraudUpdates.htm
Super Teacher Joe Hoyle Congratulates His Nine Intermediate Accounting II
Students Who Received an A Grade (9/52 ≈ 17.3%) ---
http://joehoyle-teaching.blogspot.com/2012/05/congratulations.html
Congratulations!!!
I am sending this note to the nine students who
earned the grade of A this semester in Intermediate Accounting II. We
started the semester with 52 students but we only had nine (17.3 percent)
who earned the grade of A. And, you did – congratulations!! I very much
appreciate the effort that it took to excel in such a challenging class.
From the first day to the last, we pushed through some terribly complicated
material. We never let up, not for one day. And, you did the work that was
necessary. You didn’t let the challenge overwhelm you. I am so very proud of
you and pleased for you. More importantly, you should be proud of yourself.
I sincerely believe that all 52 of those students who started back in
January had the ability to make an A. But you nine made it happen. In life,
success comes from more than ability. It comes from taking on real
challenges and investing the time necessary to make good things happen. I
occasionally get frustrated that more students don’t set out to truly excel.
However, I cannot say that about you.
As I am sure you know (or remember), I always ask
the students who make an A in my class to write a short paragraph (well,
write a short paragraph directed to next fall’s students) and explain how
you did it. I find this is important. You nine understood what I wanted you
to do and you did it. So many students never catch on to what my goals are.
It is always helpful (I believe) when the A students one semester tell the
students before the next semester “Listen, everyone can make an A in this
class but you really have to do certain things.” What are those things?
I only ask two things: be serious and tell the
truth. There's really nothing more that I can ask of you.
And, write that paragraph for me before you forget.
Have a great summer. Work hard, learn a lot, see
the world, experience great things. There is plenty of time to be a boring
adult after you graduate. Open your mind and pour as much into it as you can
over the summer.
Congratulations again. It has been a genuine
pleasure to have had the chance to work with you.
Jensen Comment
Although we don't know the entire distribution of Joe's grades in this course,
it's nice to know that in this era of massive grade inflation the median grade
is not an A.
Bob Jensen's threads on the national disgrace of grade inflation ---
http://faculty.trinity.edu/rjensen/Assess.htm#RateMyProfessor
Question About Grade Inflation
Is college too easy?
"We Have Met The Enemy and He Is Us," by Joe Hoyle, Accounting
Education Blog, May 22, 2012 ---
http://joehoyle-teaching.blogspot.com/2012/05/we-have-met-enemy-and-he-is-us.html
"Debate at Minnesota Over Grade Inflation," Inside Higher Ed,
May 30, 2012 ---
http://www.insidehighered.com/quicktakes/2012/05/30/debate-minnesota-over-grade-inflation
Faculty members at the University of Minnesota-Twin
Cities are debating whether too many students are earning A grades,
The Star Tribune reported. One proposal under
consideration is that transcripts should indicate the share of each class
receiving a particular grade, so that an A might have less value in courses
where many such grades are awarded.
Bob Jensen's threads on the national disgrace of grade inflation ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
"Good and Bad Teachers: How to Tell the Difference," by Nobel Laureate
Gary Becker, Becker-Posner Blog, September 23, 2012 ---
http://www.becker-posner-blog.com/2012/09/good-and-bad-teachers-how-to-tell-the-difference-becker.html
"Rating Teachers," by Judge
Richard Posner, Becker-Posner Blog, September 23, 2012 ---
http://www.becker-posner-blog.com/2012/09/rating-teachersposner.html
"Innovations in Higher Education? Hah! College leaders need to move
beyond talking about transformation before it's too late," by Ann Kirschner,
Chronicle of Higher Education, April 8, 2012 ---
http://chronicle.com/article/Innovations-in-Higher/131424/?sid=wc&utm_source=wc&utm_medium=en
. . .
(Conclusion)
Some of the most interesting work begins in the academy but grows beyond it.
"Scale" is not an academic value—but it should be. Most measures of prestige
in higher education are based on exclusivity; the more prestigious the
college, the larger the percentage of applicants it turns away. Consider the
nonprofit Khan Academy, with its library of more than 3,000 education videos
and materials, where I finally learned just a little about calculus. In the
last 18 months, Khan had 41 million visits in the United States alone. It is
using the vast data from that audience to improve its platform and grow
still larger. TED, the nonprofit devoted to spreading ideas, just launched
TED-Ed, which uses university faculty from around the world to create
compelling videos on everything from "How Vast Is the Universe?" to "How
Pandemics Spread." Call it Khan Academy for grown-ups. The Stanford
University professor Sebastian Thrun's free course in artificial
intelligence drew 160,000 students in more than 190 countries. No surprise,
the venture capitalists have come a-calling, and they are backing
educational startups like Udemy and Udacity.
All of those are signposts to a future where
competency-based credentials may someday compete with a degree.
At this point, if you are affiliated with an Ivy
League institution, you'll be tempted to guffaw, harrumph, and otherwise
dismiss the idea that anyone would ever abandon your institution for such
ridiculous new pathways to learning. You're probably right. Most
institutions are not so lucky. How long will it take for change to affect
higher education in major ways? Just my crystal ball, but I would expect
that institutions without significant endowments will be forced to change by
2020. By 2025, the places left untouched will be few and far between.
Here's the saddest fact of all: It is those leading
private institutions that should be using their endowments and moral
authority to invest in new solutions and to proselytize for experimentation
and change, motivated not by survival but by the privilege of securing the
future of American higher education.
The stakes are high. "So let me put colleges and
universities on notice," President Obama said in his recent State of the
Union address. "If you can't stop tuition from going up, the funding you get
from taxpayers will go down." Because of the academy's inability to police
itself and improve graduation rates, and because student debt is an
expedient political issue, the Obama administration recently threatened to
tie colleges' eligibility for campus-based aid programs to institutions'
success in improving affordability and value for students.
Whether the president's threat is fair or not, it
will not transform higher education. Change only happens on the ground.
Despite all the reasons to be gloomy, however, there is room for optimism.
The American university, the place where new ideas are born and lives are
transformed, will eventually focus that lens of innovation upon itself. It's
just a matter of time.
Jensen Comment
This is a long and important article for all educators to read carefully. Onsite
colleges have always served many purposes, but one purpose they never served is
to be knowledge fueling stations where students go to fill their tanks. At best,
colleges put a shot glass of fuel into tanks of unknown capacity.
Students go to an onsite college for many reasons other than to put fuel in
their knowledge tanks. They go to live and work in relatively safe transitional
environments between home and the mean streets. They go to mature, socialize,
mate, drink, laugh, and leap over hurdles that societies place in front of
career paths. The problem in the United States is that college onsite living and
education have become relatively expensive luxuries. Students must now make more
painful decisions about how much to impoverish their parents and how deeply to
go into debt.
I have a granddaughter 22 years old majoring in pharmacy (six year program).
She will pay off her student loans before she's 50 years old if she's lucky.
Some older students who've not been able to pay off their loans are becoming
worried that the Social Security Administration will garnish their retirement
Social Security monthly payments for unpaid student loans.
We've always known that colleges are not necessary places for learning and
scholarship. Until 43 years ago (when the Internet was born) private and public
libraries were pretty darn necessary for scholarship. Now the Internet provides
access to most known knowledge of the world. But becoming a scholar on the
Internet is relatively inefficient and overwhelming without the aid of
distillers of knowledge, which is where onsite and online college courses can
greatly add to efficiency of learning.
But college courses can be terribly disappointing as distillers of knowledge.
For one thing, grade inflation disgracefully watered down the amount of real
fuel in that shot glass of knowledge provided in a college course ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
Grades rather than learning became the tickets to careers and graduate schools,
thereby leading to street-smart cheating taking over for the real perspiration
of learning ---
http://faculty.trinity.edu/rjensen/Plagiarism.htm
When 80% of Harvard's graduating class graduates cum laude, we no
longer identify which graduates were the best scholars in their class.
Soon those graduates from Harvard, Florida A&M University, Capella
University, and those who learned on their own from free courses, video
lectures, and course materials on the Web will all face some sort of common
examinations (written and oral) of their competencies in specialties.
Competency testing will be the great leveler much like licensure examinations
such as the Bar Exam, the CPA exam, the CFA exam, etc. are graded on the basis
of what you know rather than where you learned what you know. It won't really
matter whether you paid a fortune to learn Bessel Functions onsite at MIT or for
free from the MITx online certificate program ---
http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI
If you are an educator or are becoming an educator, please read:
"Innovations in Higher Education? Hah! College leaders need to move beyond
talking about transformation before it's too late," by Ann Kirschner,
Chronicle of Higher Education, April 8, 2012 ---
http://chronicle.com/article/Innovations-in-Higher/131424/?sid=wc&utm_source=wc&utm_medium=en
A Professor Asks Former Students to Pump Up His RateMyProfessor Scores
"UNC Law Prof Sends a ‘Rather Embarrassing’ Request, Asks Former Students to
Help His Online Rating," by Christopher Danzig, Above the Law, February
23, 2012 ---
http://abovethelaw.com/2012/02/unc-law-prof-sends-a-rather-embarrassing-request-asks-former-students-to-help-his-online-rating/
With the proliferation of online rating sites, an
aggrieved consumer of pretty much anything has a surprising range of avenues
to express his or her discontent.
Whether you have a complaint about your neighborhood coffee shop or an
allegedly unfaithful ex-boyfriend, the average Joe has a surprising amount
of power through these sites.
Rating sites apparently even have the power to bring a well-known UNC Law
professor to his electronic knees.
It’s not every day that a torts professor sends his former students a “rather
embarrassing request” to repair his online reputation. It’s also certainly not
every day that the students respond en masse….
On Tuesday, Professor Michael Corrado sent the following email to 2Ls who took
his torts class last year, basically pleading for their help (the entire email
is reprinted on the next page):
Continued in article
RateMyProfessor Site ---
http://www.ratemyprofessors.com/
"The Law School System Is Broken," National Jurist, February
2012 ---
Click Here
http://www.nxtbook.com/splash/nationaljurist/nationaljurist.php?nxturl=http%3A%2F%2Fwww.nxtbook.com%2Fnxtbooks%2Fcypress%2Fnationaljurist0212%2Findex.php#/18/OnePage
Thank you Paul Caron for the heads up
If colleges want to make it more difficult to earn
an A, they need to consider why faculty members feel pressure to award them.
Peter Eubanks
"Why We Inflate Grades," by Peter Eubanks, Inside Higher Ed,
August 9, 2011 ---
http://www.insidehighered.com/views/2011/08/09/essay_on_why_faculty_members_participate_in_grade_inflation
The University of North Carolina at Chapel Hill
made headlines recently by announcing a plan to fight grade inflation:
all grades received will be contextualized on student transcripts,
allowing graduate schools and potential employers to
see grade distributions for each course and thus to determine just how much
value to attach to those ever-prevalent As and Bs. This move is the latest
in a series of attacks on what is perceived by many (rightly) to be
an epidemic in higher education today,
particularly among those institutions that seem to do well in the national
rankings.
Student anxiety about such policies is
understandable. Graduating seniors are naturally concerned about their
competitiveness during difficult economic times, while juniors and seniors
worry that they may be passed up for fellowships, summer programs, or other
academic opportunities on account of a lowered grade-point average.
Professors, too, have their concerns about grade
deflation; we not only care about our students’ successes but also about the
implications of anti-inflation policies on our own careers. While
institutions are increasingly taking measures to combat grade inflation,
there are several key pressures faculty members face when assigning grades,
and these may cause us to feel uneasy or hesitant about immediately
subscribing to a strict regimen of grade deflation. These pressures in no
way excuse or minimize the ethical implications of grade inflation, nor do I
seek to undermine the efforts of those striving to curtail what is indeed a
significant and widespread problem in higher education today. My purpose is
only to suggest some of the underlying causes of this epidemic from a
faculty perspective; to point out some of the pressures faculty face as they
assign their students grades. These pressures, as I see it, come from three
primary sources:
Pressure from students: Most professors are
experienced in the familiar end-of-semester scene in which a student comes
to office hours to argue for a higher grade. Such discussions often involve
a student’s disputation of minutiae from past exams, papers, and
assignments, all in the hope of gaining a point or two here and there and
thus retroactively improving his or her grade. Such discussions can be quite
time-consuming, and they often come at the busiest time of the semester,
thus bringing with them the temptation to do whatever it takes to close the
matter and move along. There may also be a nagging fear that minor grading
errors have indeed been made and that the student should be given the
benefit of the doubt. With ever-increasing college costs and the inevitable
sense of student entitlement and consumerism that follow, such discussions
are becoming all too common, and are not always limited to the end of the
semester. Even more important, many faculty members dread and even fear the
negative classroom atmosphere that often results from giving students "bad"
grades (i.e., C or below, though even a B fits this category for many),
particularly in courses dependent on student discussion and participation,
such as a seminar or a foreign language class.
Pressure from administrators: Success with
student evaluations is a career necessity, whether one is a young scholar
seeking the elusive Elysium of tenure or one belongs to that now-majority of
faculty members who teach part-time or on an adjunct basis and are dependent
on positive student evaluations for reappointment. At teaching-intensive
colleges and universities, in particular, student evaluations are often of
paramount importance, and faculty members must do what they can to keep
their customers happy. Many faculty members feel, and numerous studies seem
to suggest, that generous grade distributions correspond to positive
teaching evaluations, so many faculty members, under pressure from
administrators to produce good evaluations, feel a temptation to inflate
grades to secure their own livelihoods. Since administrators usually have
neither the time nor the expertise to make independent evaluations of a
professor’s teaching ability (imagine a dean with both the leisure and the
proficiency to sit in on and evaluate in the same semester both a Russian
literature course and an advanced macroeconomics course, without having done
any of the previous coursework...) they must rely heavily on student
descriptions of what goes on in the classroom, descriptions that are often
contradictory and that unfortunately do not always cohere.
Pressure from colleagues: Some faculty who
wish to curb grade inflation may feel that they are the only ones fighting
the problem. If everyone else is giving out inflated grades, why should they
be the ones to stand alone, only to incur the displeasure of students who
may be confused by inconsistent standards? As college freshmen arrive on
campus increasingly unprepared for college work, faculty members, inheriting
a problem passed on to them by their colleagues in secondary education,
often have the difficult task of trying to determine reasonable standards of
achievement. It takes effort and planning for faculty to balance their
professional responsibilities to both their respective disciplines and to
their students’ positive academic experience. In an era where budget cuts
affect most severely those departments and programs with low enrollments, no
one wants to lose the bidding war for students, and many professors,
particularly those in vulnerable fields, fear that a "strict
constructionist" approach to grade deflation may cost them student interest
and consequently much-needed institutional support, both of which risk being
redistributed to more favored colleagues. Furthermore, the seemingly
ubiquitous nature of grade inflation may simplify the ethical quandaries
involved: if everyone understands that grades are being unfairly inflated,
then there may, in fact, be no unfairness involved at all, since the very
transparency of grade inflation thus removes any sense of deception that may
linger in our minds.
There is a final pressure to grade inflate, and it
comes from ourselves. It may be the disquieting feeling that our own efforts
in the classroom have sometimes been inadequate, that poor student
performance reflects poor preparation or teaching on our part, and that
grades must be inflated to compensate for our failings. It may come from the
difficulties inherent in assigning grades to elusive and ultimately
unquantifiable phenomena such as class participation, essays, student
presentations, and the like. In such cases, grade inflation ceases to
function as a lazy or disinterested tool for maintaining steady waters; it
becomes, instead, a corrective measure seeking to make restitution for our
own perceived shortcomings.
Continued in article
Bob Jensen's threads on the utter disgrace of grade inflation ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
"Professors Cede Grading Power to Outsiders—Even Computers: One
college gives the job to software, while another employs independent 'evaluators',"
by Jeffrey Young, Chronicle of Higher Education, August 7, 2011 ---
http://chronicle.com/article/Professors-Cede-Grading-Power/128528/
The best way to eliminate grade inflation is to
take professors out of the grading process: Replace them with professional
evaluators who never meet the students, and who don't worry that students
will punish harsh grades with poor reviews. That's the argument made by
leaders of Western Governors University, which has hired 300 adjunct
professors who do nothing but grade student work.
"They think like assessors, not professors," says
Diane Johnson, who is in charge of the university's cadre of graders. "The
evaluators have no contact with the students at all. They don't know them.
They don't know what color they are, what they look like, or where they
live. Because of that, there is no temptation to skew results in any way
other than to judge the students' work."
Western Governors is not the only institution
reassessing grading. A few others, including the University of Central
Florida, now outsource the scoring of some essay tests to computers. Their
software can grade essays thanks to improvements in artificial-intelligence
techniques. Software has no emotional biases, either, and one Florida
instructor says machines have proved more fair and balanced in grading than
humans have.
These efforts raise the question: What if
professors aren't that good at grading? What if the model of giving
instructors full control over grades is fundamentally flawed? As more
observers call for evidence of college value in an era of ever-rising
tuition costs, game-changing models like these are getting serious
consideration.
Professors do score poorly when it comes to fair
grading, according to a study published in July in the journal Teachers
College Record. After crunching the numbers on decades' worth of grade
reports from about 135 colleges, the researchers found that average grades
have risen for 30 years, and that A is now the most common grade given at
most colleges. The authors, Stuart Rojstaczer and Christopher Healy, argue
that a "consumer-based approach" to higher education has created subtle
incentives for professors to give higher marks than deserved. "The standard
practice of allowing professors free rein in grading has resulted in grades
that bear little relation to actual performance," the two professors
concluded.
Naturally, the standard grading model has plenty of
defenders, including some who argue that claims of grade inflation are
exaggerated—students could, after all, really be earning those higher
grades. The current system forges a nurturing relationship between
instructor and student and gives individualized attention that no robot or
stranger could give, this argument goes.
But the efforts at Western Governors and Central
Florida could change that relationship, and point to ways to pop any
grade-inflation bubble.
An Army of Graders
To understand Western Governors' approach, it's
worth a reminder that the entire institution is an experiment that turns the
typical university structure on its head. Western Governors is entirely
online, for one thing. Technically it doesn't offer courses; instead it
provides mentors who help students prepare for a series of high-stakes
homework assignments. Those assignments are designed by a team of
professional test-makers to prove competence in various subject areas.
Continued in article
Jensen Comment
If grading is taken out of the hands of a teacher, one problem is that the
teacher loses a big carrot/stick for motivating students to learn. There's
greater pressure from all sides forcing teachers to teach to the tests and to
other aspects of the grading process that are out of their control.
For example, I motivated students to prepare for class by giving a quiz every
day. The quizzes motivated students because they were factored rather heavily
into final grades. If my quizzes no longer factored into final grades, my
students would have reduced incentives to prepare for each and every class.
They're more apt to wait until the last minute to cram for the examinations that
are not graded by me.
Be that as it may, I favor competency-based grading in this era of grade
inflation where teachers shivering in fear of student evaluations make courses
easier and easier ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
Bob Jensen's threads on competency-based assessment ---
http://faculty.trinity.edu/rjensen/Assess.htm#ECA
Bob Jensen's threads on computer-based assessment ---
http://faculty.trinity.edu/rjensen/Assess.htm#ComputerBasedAssessment
Bob Jensen's threads on assessment in general ---
http://faculty.trinity.edu/rjensen/Assess.htm
July 23, 2011 message from a graduate student in the Philippines
Thank you so much for sharing some write-ups about higher education
controversies such as grade inflation. I'd like to be clarified,
1) What actions constitute grade inflation? Some state universities like
Central Mindanao University of Bukidnon, Philippines, incorporate a grading
system that allows students to pass the exam if they get correct answers in
at least 50% of the total items. This is because of the term "teacher
factor" where teaching effectiveness is also considered as a contributing
factor to the failure of the students to fully understand the subject
matter. In accountancy, however, the standard is much higher at 65%
zero-based as passing rate in order to maintain the quality of students
allowed to graduate to ensure good school performance in the CPA Board
Exams. But with the grading this high at 65% zero-based, often the students,
including the brightest ones, hardly even reach 50% in total raw scores.
Because of this, the teacher evaluates first the overall test results to see
if a decent number of students got passing grades, and if not, subjectively
lowers the passing rate to allow a certain percentile range to pass. Is this
considered grade inflation?
2) What programs or policies would you recommend to deal with grade
inflation?
July 23, 2011 reply from Bob Jensen
Grade inflation is usually defined in terms of the trends in median course
grades.
In the 1940s a median grade was a C.
"Grades on the Rise," by Jennifer Epstein, Inside Higher Ed,
March 5, 2010 ----
http://www.insidehighered.com/news/2010/03/05/grades
Grades awarded to U.S. undergraduates
have risen substantially in the last few decades, and grade inflation
has become particularly pronounced at selective and private colleges, a
new analysis of data on grading practices has found.
In “Grading in American Colleges and Universities,”
published Thursday in Teachers College Record, Stuart Rojstaczer,
a former Duke University professor of geology, and Christopher Healy, an
associate professor of computer science at Furman University, illustrate
that grade point averages have risen nationally throughout most of the
last five decades. The study also indicates that the mean G.P.A. at an
institution is “highly dependent” upon the quality of its students and
whether it is public or private.
“There’s no doubt we are grading easier,”
said Rojstaczer,
the founder of GradeInflation.com, where he’s built a database of grades
at a range of four-year institutions since 2003.
The findings are based on historical data
dating back at least 15 years at more than 80 colleges and universities,
and contemporary data from more than 160 institutions with enrollments
totaling more than 2,000,000.
Since the 1960s, the national mean G.P.A. at the institutions from which
he’s collected grades has risen by about 0.1 each decade – other than in
the 1970s, when G.P.A.s stagnated or fell slightly. In the 1950s,
according to Rojstaczer’s data, the mean G.P.A. at U.S. colleges and
universities was 2.52. By 2006-07, it was 3.11.
Though there’s “not a simple answer as
to why we grade the way we do,” Rojstaczer speculated on several reasons
why mean G.P.A.s have increased. One factor, he said, is that faculty
and administrators “want to make sure students do well” post-graduation,
getting into top graduate schools and securing jobs of their choice.
Particularly since the 1980s, “the idea that we’re going to grade more
leniently so that our students will have a leg up has really seemed to
take hold.”
Grades have also been pushed up by
“pervasive use of teacher evaluation forms,” Rojstaczer said. “You can
tell a professor that grading easy has no impact on their evaluations …
and there are many arguments that say that’s the case, but the
perception is that it does, so professors behave in a certain way,”
giving higher grades to their students than they might if there were no
evaluation forms. (This might prove especially true at institutions with
high proportions of adjuncts, who are particularly vulnerable to losing
teaching assignments if they don't receive high student evaluations.)
Another possible reason: students’
expectations. At private institutions, students are consumers expecting
that their diplomas and transcripts be worth what they (or their
parents) have paid for them. At more selective institutions, students
enter with ever-higher high school G.P.A.s and “you don’t want the
student to come to your office in tears for a B or C,” Rojstaczer said.
In their analysis of contemporary
grading data, he and Healy found that, on a 4.0 scale, G.P.A.s at
private colleges and universities were 0.1 point higher than at publics
admitting students with identical combined math and verbal SAT scores.
Among institutions with equal selectivity – measured by the average of
the percentage of students with high school G.P.A.s above 3.75, the
percentage of students who graduated in the top decile of their high
school class and the percentage of applicants rejected – students at
privates had G.P.A.s 0.2 higher than their peers at publics.
The data also support the
commonly-held opinion that engineers’ G.P.A.s tend to be lower than
those of students who major in the humanities or social sciences.
But the study does not take into
account economic factors or broader national data, which is problematic
to Clifford Adelman, a senior associate at the Institute for Higher
Education Policy, who in the past has been critical of
GradeInflation.com.
Adelman authored a chapter in 2008’s
Grade Inflation: Academic Standards in Higher Education
in which he argued that longitudinal data
from the Department of Education’s National Center for Education
Statistics suggested that grade inflation was not a major trend of the
last few decades. “Unobtrusive national data are of no interest to folks
who labor to build what are essentially quantitative anecdotes into a
preferred story, and the unobtrusive national data tell a very different
story.”
Rojstaczer and Healy’s study, he
added, “doesn’t cite anything that doesn’t support a position based on
fragmentary, fugitive data … and (with the exception of one article)
completely ignores the economic literature."
Continued in article
Bob Jensen's threads on grade inflation ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
One of the selling points of for-profit universities is that they are more
open to non-traditional students than traditional nonprofit colleges and
universities are. This openness is thus a "diversity" selling point for
for-profit universities.
However, one of the drawbacks is that when traditional colleges and
universities attempt to be more open to diversity and admission of
non-traditional students, there are huge problems of enforcing academic
standards and serious possibilities that most of the non-traditional students
will not graduate.
Here's how some for-profit universities deal unethically with assessment
issues. It's small wonder that for-profit universities are so popular with
non-traditional students.
"Undercover Probe Finds Lax Academic Standards at Some For-Profit Colleges,"
by Kelly Field, Chronicle of Higher Education, November 22, 2011 ---
http://chronicle.com/article/Undercover-Probe-Finds-Lax/129881/?sid=wc&utm_source=wc&utm_medium=en
An undercover investigation by the Government
Accountability Office has found evidence of lax academic standards in some
online for-profit programs.
The probe, which is described in a
report
made public Tuesday, found that staff at six of the 12
colleges that enrolled the investigators tolerated plagiarism or awarded
credit for incomplete or shoddy work.
The release of the report, "For-Profit Schools:
Experiences of Undercover Students Enrolled in Online Classes at Selected
Colleges," comes roughly a year after the accountability office
revised an earlier report on recruiting abuses at
for-profit colleges, acknowledging errors and omissions in its findings. A
coalition of for-profit colleges has
sued the office over that report, accusing its
investigators of professional malpractice.
In that earlier investigation, the office sent
undercover investigators to 15 for-profit colleges to pose as prospective
students. It
found widespread deception in recruiting by the
colleges, with many employees providing students with false or misleading
information about graduation rates, job prospects, or earning potential.
This time, the agents attempted to enroll in online
programs at 15 for-profit colleges using a home-school diploma or a diploma
from a closed high school. Twelve of the colleges accepted them.
The "students" then proceeded to skip class,
plagiarize, and submit "substandard" work. Though several ultimately failed
their classes, some got credit for shoddy or plagiarized work along the way.
At one college, a student received credit for six
plagiarized assignments; at another, a student submitted photos of political
figures and celebrities in lieu of an essay, but still earned a passing
grade. A third student got full credit on a final project, despite
completing only two of the three required components. That same student
received full credit for an assignment that had clearly been prepared for
another class.
In two cases, instructors confronted students about
their repeated plagiarism but took no disciplinary action against them. One
student received credit for a response that was copied verbatim from other
students' discussion posts.
Instructors at the other six colleges followed
their institutions' policies on grading and plagiarism, and in some cases
offered to help students who appeared to be struggling.
All of the students ultimately withdrew or were
expelled from the programs. Three of the colleges failed to provide the
departing students with federally required exit counseling about their
repayment options and the consequences of default.
Sen. Tom Harkin, Democrat of Iowa, who requested
the report, said its findings "underscore the need for stronger oversight of
the for-profit education industry."
"It is obvious that Congress must step in to hold
this heavily federally subsidized industry more accountable," he said.
Continued in article
Jensen Comment
This makes me wish that similar investigations (audits?) would be extended to
large samples of nonprofit colleges and universities, where
grade inflation is also rampant.
Most universities now have financial internal auditors and are subjected to
governmental or independent CPA audits. But few have independent audits of the
variability in academic standards between departments and between individual
faculty members.
Bob Jensen's threads on For-Profit Universities Operating in the Gray Zone
of Fraud ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#ForProfitFraud
Bob Jensen's threads on assessment are at
http://faculty.trinity.edu/rjensen/Assess.htm
November 28, 2011 reply from David Albrecht
Bob, I agree with your comment that the study could
have been expanded. As it is, the study is hardly scientific. The sample
size is small, and we have no idea whether lax standards, instructor
negligence, or instructor mercy are responsible for the actions. In
traditional schools, whether they be state funded or private, I wonder if
more abuses would be found among tenure-track or non-tenure-track profs.
Dave Albrecht
November 28, 2011 reply from Bob Jensen
Hi David,
In my opinion, grade inflation and lax academic standards may be more of a
problem for tenured professors than for probationary (non-tenured) professors
on the tenure track, and maybe even more than for adjunct professors (but
adjuncts are so variable it's hard to generalize).
I will provide an example of non-tenured faculty who are on tenure tracks at
Trinity University. Such probationary faculty are under severe scrutiny by
their immediate departmental faculty and upper-level university committees.
There's heavy pressure on all faculty involved to warn probationary faculty
about inadequate versus adequate progress toward tenure. The hope is that
all nontenured faculty not making adequate progress by year six will have
been terminated, so that everyone who does go up for tenure has a high
probability of being approved.
Included in what Trinity calls "probationary reviews" as well as final
"tenure applications" are teaching evaluations, grading distributions for
each course, copies of examinations in each course, copies of course
syllabi, and
self-review statements of candidates. There are also external (off-campus)
reviews in tenure applications, but these are mostly focused on research and
publication.
Tenured faculty are not subjected to such rigorous reviews, and hence a few
tenured faculty, in my view, become more lax about academic standards.
Hopefully these are just outliers. There is a rigorous review of associate
professors at times when they apply for full professorships. These are much
like tenure applications and require a truckload of teaching evaluations,
grading distributions for each course, copies of examinations in each
course, copies of course
syllabi, and
self-review statements of candidates. There are also external (off-campus)
reviews in full-professorship applications, but these are mostly focused on
research and publication.
In my 24 years at Trinity University I was completely surprised by the
proportion of tenure-track hires who were terminated before even
reaching the tenure application stage. I was even more surprised by
some of the tenure applicants and full-professor applicants who were
rejected by the P&T Committee and/or the President of the University.
I was also surprised in some years by some of the long-term tenured
faculty (some of whom were lifetime associate professors) who had their
tenure contracts bought out by deals made with the President of the
University. In some cases those buyouts were for lackluster teaching
and/or lackluster academic standards.
Of course there were also a few faculty members who had some other
dysfunctional behavior leading to buyouts. One of my friends had an early
onset of dementia and was somewhat of a problem even after termination (on a
generous early retirement package), because he continued to hang around
computer labs and the campus library and showed off his vanity press
"research" book that was garbage to the point of embarrassment. He claimed
that proper exercise could prevent all forms of cancer.
Some campus officials and faculty, including me, breathed a sigh of relief
when he eventually died and stopped giving his vanity press book away for
free around Texas.
Of course there are also those who will breathe a sigh of relief when one of
their retired faculty members stops sending so many messages to the AECM.
Respectfully,
Bob Jensen
"The Chronicle's special report on Online Learning explores how calls for
quality control and assessment are reshaping online learning," (Not Free),
Chronicle of Higher Education, November 2011 ---
https://www.chronicle-store.com/Store/ProductDetails.aspx?CO=CQ&ID=78602&cid=ol_nlb_wc
The Chronicle's special report on Online Learning explores how calls for
quality control and assessment are reshaping online learning.
As online learning spreads throughout higher
education, so have calls for quality control and assessment. Accrediting
groups are scrambling to keep up, and Congress and government officials
continue to scrutinize the high student-loan default rates and aggressive
recruiting tactics of some for-profit, mostly online colleges. But the push
for accountability isn't coming just from outside. More colleges are looking
inward, conducting their own self-examinations into what works and what
doesn't.
Also in this year's report:
- Strategies for teaching and doing research
online
- Members of the U.S. military are taking online
courses while serving in Afghanistan
- Community colleges are using online technology
to keep an eye on at-risk students and help them understand their own
learning style
- The push to determine what students learn
online, not just how much time they spend in class
- Presidents' views on e-learning
Bob Jensen's threads on asynchronous learning ---
http://faculty.trinity.edu/rjensen/255wp.htm
Bob Jensen's threads on online course and degree programs ---
http://faculty.trinity.edu/rjensen/Crossborder.htm
This creates perverse incentives for professors to
demand little and give out good grades. (Indeed, the 36 percent of students in
our study who reported spending five or fewer hours per week studying alone
still had an average G.P.A. of 3.16.) On those commendable occasions when
professors and academic departments do maintain rigor, they risk declines in
student enrollments. And since resources are typically distributed based on
enrollments, rigorous classes are likely to be canceled and rigorous programs
shrunk. Distributing resources and rewards based on student learning instead of
student satisfaction would help stop this race to the bottom.
"Your So-Called Education," by Richard Arum and Josipa Roksa, The New
York Times, May 14, 2011 ---
http://www.nytimes.com/2011/05/15/opinion/15arum.html?_r=1&nl=todaysheadlines&emc=tha212
. . .
In a typical semester, for instance, 32 percent of
the students did not take a single course with more than 40 pages of reading
per week, and 50 percent did not take any course requiring more than 20
pages of writing over the semester. The average student spent only about 12
to 13 hours per week studying — about half the time a full-time college
student in 1960 spent studying, according to the labor economists Philip S.
Babcock and Mindy S. Marks.
Not surprisingly, a large number of the students
showed no significant progress on tests of critical thinking, complex
reasoning and writing that were administered when they began college and
then again at the ends of their sophomore and senior years. If the test that
we used, the Collegiate Learning Assessment, were scaled on a traditional
0-to-100 point range, 45 percent of the students would not have demonstrated
gains of even one point over the first two years of college, and 36 percent
would not have shown such gains over four years of college.
Why is the overall quality of undergraduate
learning so poor?
. . .
Fortunately, there
are some relatively simple, practical steps that colleges and universities
could take to address the problem. Too many institutions, for instance, rely
primarily on student course evaluations to assess teaching. This creates
perverse incentives for professors to demand little and give out good
grades. (Indeed, the 36 percent of students in our study who reported
spending five or fewer hours per week studying alone still had an average
G.P.A. of 3.16.) On those commendable occasions when professors and academic
departments do maintain rigor, they risk declines in student enrollments.
And since resources are typically distributed based on enrollments, rigorous
classes are likely to be canceled and rigorous programs shrunk. Distributing
resources and rewards based on student learning instead of student
satisfaction would help stop this race to the bottom.
Others involved in
education can help, too. College trustees, instead of worrying primarily
about institutional rankings and fiscal concerns, could hold administrators
accountable for assessing and improving learning. Alumni as well as parents
and students on college tours could ignore institutional facades and focus
on educational substance. And the Department of Education could make
available nationally representative longitudinal data on undergraduate
learning outcomes for research purposes, as it has been doing for decades
for primary and secondary education.
Most of all, we
hope that during this commencement season, our faculty colleagues will pause
to consider the state of undergraduate learning and our collective
responsibility to increase academic rigor on our campuses.
Richard Arum, a professor of sociology and
education at New York University, and Josipa Roksa, an assistant
professor of sociology at the University of Virginia, are the authors of
“Academically Adrift: Limited Learning on College Campuses.”
Bob Jensen's threads on teaching evaluations and grade inflation are at
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
Professors have become a major part of the problem of declining
expectations for student performance
"Lower Education," by Michael Morris, Inside Higher Ed,
September 9, 2011 ---
Click Here
http://www.insidehighered.com/views/2011/09/09/morris_essay_on_faculty_responsibility_for_decline_in_college_student_standards
Toby (not his real name) flunked a graduate course
I taught last year. He failed the in-class assignment (a mid-term essay
exam) as well as the out-of-class assignments (a couple of case analyses and
a take-home exam). Reviewing Toby’s work was excruciating; extracting
coherence from his paragraphs was a futile exercise, even with repeated
readings. Theoretical analysis in his writing was virtually nonexistent. Put
simply, this was an academic train wreck.
As I interacted with Toby over the course of the
term, I kept asking myself, “How did this pleasant young man ever manage to
obtain an undergraduate degree?” He certainly had one, awarded by a
regionally accredited institution (not mine). And how did he get into yet
another institution (my institution, but not my program) to pursue a
master’s degree?
Welcome to the world of Lower Education. Toby’s
case may be extreme, but it underscores a fundamental reality that shapes a
major segment of higher education in the United States: Colleges cannot
survive without students, so colleges that have a difficult time competing
for the “best” students compete for the “next best” ones. And colleges that
have trouble securing the “next best” students focus on the “next-next
best” ones, and on and on and on, until a point is reached where the word
“best” is no longer relevant. When this occurs, students who are not
prepared to be in college, and certainly not prepared to be in graduate
school, end up in our classrooms.
This is not startling news. It’s a rare college or
university that does not have an academic remediation/triage center of some
kind on campus, where an enormous amount of time is spent teaching students
skills they should have learned in high school. To be sure, many of these
unprepared students drop out of college before graduation, but a significant
percentage do make it to the finish line. Some of the latter will have
indeed earned their degree through great effort and what they’ve learned
from us. But others will have muddled through without displaying the skills
we should require of all students. My 35 years of university experience tell
me that in these cases faculty collusion is often a contributing factor.
What is the nature of this collusion? In far too
many instances, little is required of students in terms of the quality and
quantity of their academic work, little is produced, and the little produced
is, to put it mildly, graded generously. Some might argue that the
mind-numbing proportions of A’s we often see these days, along with the
relative scarcity of low grades, are a reflection of more effective teaching
strategies being employed by professors, coupled with a growing population
of bright students committed to academic excellence. Unfortunately, this
uplifting scenario strikes me as much less persuasive than one that
implicates factors such as transactional/contract grading (“5 article
reviews equal an A, 4 equals a B,” etc.), faculty who wish to avoid arguing
with increasingly aggressive students about grades, faculty who believe that
awarding high grades generates positive student evaluations, faculty who
express their philosophical opposition to grading by giving high grades, and
the growing percentage of courses taught by part-time and non-tenure-track
faculty members who might see the assigning of a conspicuous number of low
grades as a threat to their being re-hired.
One of the most pernicious consequences of this
state of affairs is cynicism toward higher education among those most
directly responsible for delivering higher education -- the faculty.
Research suggests that one of the most powerful sources of motivation for
outstanding employee performance is goal/value internalization. This occurs
when espoused organizational goals and values are “owned” by organizational
members, who then strive to achieve the goals and live up to the values in
their work. Colleges and universities have traditionally been in a
privileged position with respect to drawing upon this type of motivation,
given their educational mission. The beliefs associated with this mission
can include a sizable chunk of myth, but as societal myths go, the ones
embraced by higher education (e.g., the ability of research, knowledge, and
analytical skill to enhance the public good) tend to have high social value.
In the current zeitgeist, however, many faculty are
dismayed to see the provision of educational credentials trumping the
actual provision of education. (Fifty might not be the new forty, but
the master’s degree is certainly the new bachelor’s.) This perception is
enhanced by a proliferation of curriculum-delivery formats (weekend courses,
accelerated and online programs, etc.) whose pedagogical soundness often
receives much less attention than the ability of the formats to penetrate
untapped educational markets. It is difficult for a strong commitment to
academic integrity to thrive in such environments.
Faculty who are distressed over all of this should
not wait for presidents, provosts and deans to rescue higher education from
itself. Moreover, regional accrediting bodies, despite their growing
emphasis on outcomes assessment, do not typically focus on courses, programs
and admissions standards in a way that allows them to adequately address
these issues. For the most part it is faculty who teach the classes,
design and implement curricula, and, at least at the graduate level,
establish admissions policies for programs. What should faculty do? I offer
three modest suggestions:
- At the departmental level, work to develop a
culture where expectations for student performance are high. When
faculty members believe that teaching challenging courses is “the way we
do things here,” they are less likely to offer non-challenging ones.
- Advocate throughout the institution for the
centrality of academic quality to policy making, program development,
and program implementation. The question “What are we doing to ensure
that X embodies a commitment to academic excellence?”
should never be left implicit.
- Create opportunities for faculty and
administrators to come together in small groups to explore the issues
raised by Lower Education. These two constituencies need to find a way
to collaborate more effectively, and the mutual stereotyping that
frequently characterizes their relationship represents a major obstacle.
If we want our conversations relevant to Lower Education to change,
let’s experiment with changing the structure within which some of those
conversations take place.
Continued in article
Jensen Comment
In addition to lowered admission standards, an even bigger problem is the grip
students have over teachers due to fears teachers have of poor student
evaluations, including those acerbic evaluations on RateMyProfessor that are
available for the entire world to view ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
Accounting Professors are the Least Hot Business Professors (according to
students)
Just in case you didn't notice, Finance professors
were rated as the hottest among the business disciplines (and accounting was
rated least hot). So if you're deciding between a PhD in Finance and Accounting,
if you want hotter colleagues, choose Finance, but if you want to look better by
comparison, go with accounting.
The Unknown Professor, Financial Rounds Blog, January 29, 2009 ---
http://financialrounds.blogspot.com/
Jensen Comment
Although the Financial Rounds Blog has a lot of tongue in cheek, serious
caution is warranted before electing to go into a finance doctoral program.
Demand for finance graduates may be down for a long, long time which, in turn,
will affect the demand for new PhD graduates in economics and finance. But I've
not seen anywhere that the demand for accounting PhD graduates will be
relatively low for the long haul (apart from the short term budget crises
colleges are having these days that in many cases has frozen virtually all
hiring). In fact, a lot of undergraduate finance majors may be shifting over to
accounting, thereby creating more need for accounting professors.
Apart from short-term hiring freezes, new PhDs in accounting are in such
short supply that it's probably better to consider job opportunities and to
lower expectations about being rated as hot on campus ---
http://faculty.trinity.edu/rjensen/theory01.htm#DoctoralPrograms
Question
What disciplines on campus have the hottest professors?
Answer ---
Click Here
"Attractiveness, Easiness, and Other Issues: Student Evaluations of
Professors on RateMyProfessors.com," by James Felton (Central Michigan
University), Peter T. Koper, John Mitchell, and Michael Stinson, SSRN,
July 2006 ---
http://papers.ssrn.com/sol3/papers.cfm?abstract_id=918283
Question
What emerges as the single most important criterion for professorial
ratings on RateMyProfessors.com?
Answer
Grading. Grade inflation has been driven in part by the rise in the use of
required teaching evaluations in performance and tenure reviews ---
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
Evaluating Teachers: The Important Role of Value-Added ---
http://www.brookings.edu/~/media/Files/rc/reports/2010/1117_evaluating_teachers/1117_evaluating_teachers.pdf
American RadioWorks: Testing Teachers (radio broadcast) ---
http://americanradioworks.publicradio.org/features/testing_teachers/
Teaching Evaluations Lead to Grade Inflation
Stanley Fish ---
http://en.wikipedia.org/wiki/Stanley_Fish
June 30, 2010 message from Scribner, Edmund
[escribne@AD.NMSU.EDU]
Jim, Bob, et al.,
One philosophy of teaching
(popular in the 1990s when TQM was at center stage) is to treat it like
manufacturing, where students are viewed as co-workers and learning is
viewed as the product. The grade of "A" becomes more or less like the grade
of "Pass" in that failure to receive an "A" represents a quality (Q)
failure. Students keep working until their output is "Q."
BTW, if Stanley Fish's two
recent blog entries on student ratings have appeared on AECM, I've missed
them:
Part One:
http://opinionator.blogs.nytimes.com/2010/06/21/deep-in-the-heart-of-texas/
Part Two:
http://opinionator.blogs.nytimes.com/2010/06/28/student-evaluations-part-two/
Ed Scribner
New Mexico State
Las Cruces, NM, USA
-----Original Message-----
From: AECM, Accounting Education using Computers and Multimedia [mailto:AECM@LISTSERV.LOYOLA.EDU]
On Behalf Of James R. Martin/University of South Florida
Sent: Thursday, June 24, 2010 11:35 AM
To:
AECM@LISTSERV.LOYOLA.EDU
Subject: Re: Student Evaluations
Bob,
All the professional exams are
pass fail, but students are still motivated to study. So I don't think
motivation to study is really much of a defense for grading and ranking. If
we set the level of a passing performance high enough, then everyone who
expects to pass will study and perhaps work together with others as a real
team to learn when they are not competing with each other for grades. More
cooperation, more learning, everybody wins. Perhaps the university would
produce a better product, the value of the degree would increase, and
everybody wins. We just assume we have to grade and rank everyone because
that's the way the system was designed.
The argument that employers
would not know who to hire does not stand up either. I think our purpose
should be to educate, not to screen people for employment. As one author put
it "What other industry or organization rates its products from A through F
and worries about grade inflation?"
Jim
Earlier Message from Bob Jensen
Hi Jim,
I take a less extreme stance. I think we
should simply go back to the old days when teaching evaluations went to the
instructors and nobody else.
Of course in the good old days we did not have
RateMyProfessors.com, where students can (selectively) evaluate faculty
without any permissions or controls (other than the screening of extremely
profane, sexist, and otherwise defamatory posts).
Bob Jensen
Bob Jensen's threads on teaching evaluations and grade inflation ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
"NLJ: The Impact of Higher Bar Passage Requirements on Law
School Diversity," by Paul Caron, TaxProf Blog, July 18, 2011 ---
http://taxprof.typepad.com/
NLJ: The Impact of Higher Bar Passage Requirements on Law School
Diversity
National Law Journal,
ABA Again Confronts the Diversity Dilemma: A Renewed Push for Higher Law
School Standards Has its Downside, by Karen Sloan:
[The ABA] is trying to reconcile the legal
profession's need for greater diversity with its desire to push law
schools to better prepare students to pass the bar. For the second time
in four years, it is considering raising the minimum bar-passage-rate
requirement as part of a comprehensive review of law school
accreditation standards. ...
The hope is that higher standards would push
schools with lower passage rates to invest more in academic support and
bar preparation. ... They also would serve a consumer-protection
function, assuring law students a reasonable expectation of passing the
bar.
The ABA has already signaled that it takes
bar-passage rates seriously. It
revoked provisional accreditation from the University of La Verne
College of Law in Ontario, Calif., in June
because of the school's low bar-passage rates. In 2009, a scant 34% of
La Verne students passed the California bar examination on the first
try, and the school's first-time bar-passage rate was 53% in 2010 —
improved, but still not good enough, according to the ABA.
Applying a bright-line bar-passage standard is
a fairly new idea for the ABA. Before 2008, the ABA spelled out no
specific bar-passage minimum. Instead, it enforced what was called the
"70/10 Rule": At least 70% of the school's first-time bar takers had to
pass the exam in the school's home state. In the alternative, the
first-time bar-pass rate could be no lower than 10% below the average of
other ABA-accredited schools in that state.
The U.S. Department of Education, which has
authorized the ABA to be the national accreditor of law schools, asked
for a clearer standard in 2007. After protracted wrangling, the ABA
adopted a requirement that at least 75% of a law school's graduates pass
the bar exam in at least three of the past five years. Schools can also
meet the standard if their first-time bar-passage rate is no more than
15% below other ABA schools in the same state during three of the past
five years. The 15% requirement is intended to level the playing field
across states, given that passage rates vary widely depending on
jurisdiction. The outcome was a compromise, representing a minimum
standard higher than what diversity advocates wanted but lower than the
initial proposal. ...
The new proposal would require that at least
80% of graduates pass the bar in three of the past five years, or that
first-time bar-passage rates be no more than 10% below other schools in
the same state — bringing the standards closer to the test used before
2008.
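Because the standards quoted above are bright-line numerical tests, they can be expressed compactly in code. Here is a minimal sketch in Python of the 75%/15% rule as described in the article; the function names and the five-year sample data are hypothetical illustrations of my own, not anything published by the ABA.

```python
# Sketch of the two alternative ABA bar-passage tests described above.
# Function names and sample data are hypothetical illustrations.

def meets_absolute_test(pass_rates, threshold=0.75, years_required=3):
    """True if graduates' bar-passage rate met the threshold in at
    least `years_required` of the (up to five) years supplied."""
    return sum(rate >= threshold for rate in pass_rates) >= years_required

def meets_relative_test(school_rates, state_rates, max_gap=0.15,
                        years_required=3):
    """True if the school's first-time passage rate was within `max_gap`
    of other ABA schools in the state in enough of the years supplied."""
    good_years = sum(school >= state - max_gap
                     for school, state in zip(school_rates, state_rates))
    return good_years >= years_required

# Five years of hypothetical data for one school.
grad_pass = [0.78, 0.74, 0.80, 0.76, 0.71]    # graduates passing the bar
first_time = [0.60, 0.58, 0.66, 0.62, 0.59]   # school first-time rate
state_avg = [0.70, 0.72, 0.71, 0.69, 0.73]    # other ABA schools in state

# A school meets the standard by satisfying either alternative.
accredited = (meets_absolute_test(grad_pass)
              or meets_relative_test(first_time, state_avg))
print(accredited)  # → True (3 of 5 years meet the 75% threshold)
```

The proposed tightening in the article amounts to calling the same functions with `threshold=0.80` and `max_gap=0.10`.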
Turkey Times for Overstuffed Law Schools ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#OverstuffedLawSchools
Question
If median grades for each course are made publicly available on the Internet,
will students seek out courses with high or low median grades?
Examples of such postings at Cornell University are at
http://registrar.sas.cornell.edu/Student/mediangradesA.html
Hypothesis 1
Students will seek out the lower grade average courses/sections thinking that
they have a better chance to compete for high grades.
Hypothesis 2
Students will seek out the higher grade average courses/sections thinking that
particular instructors are easier graders.
However, when Cornell researchers studied about
800,000 course grades issued at Cornell from 1990 to 2004, they found that most
students visited the site to shop for classes where the median grade was higher.
Plus, professors who tended to give out higher grades were more popular.
Students with lower SAT scores were the most likely to seek out courses with
higher median grades.
"Easy A's on the Internet: A surprising Cornell experiment in posting
grades; plus a look at recent research into ethical behavior, service charges,
and volunteer habits," by Francesca Di Meglio, Business Week, December
11, 2007 ---
http://www.businessweek.com/bschools/content/dec2007/bs20071211_885308.htm?link_position=link2
In a striking
example of unintended consequences, a move by Cornell
University to give context to student grades by publicly
posting median grades for courses has resulted in exactly
the opposite student behavior than anticipated.
Cornell's College of Arts & Sciences originally set up a
Web site in 1997 where median
grades were posted, with the intention of also printing
median class grades alongside the grade the student actually
received in the course on his or her permanent transcript.
Administrators thought students would use the information on
the Web site to seek out classes with lower median
grades—because, they reasoned, an A in a class that has a
median grade of B-minus would be more meaningful than, say,
an A in a course where the median was A-plus.
Course Shopping Leads to Grade Inflation
However,
when Cornell researchers studied about 800,000 course grades
issued at Cornell from 1990 to 2004, they found that most
students visited the site to shop for classes where the
median grade was higher. Plus, professors who tended to give
out higher grades were more popular. Students with lower SAT
scores were the most likely to seek out courses with higher
median grades.
This
"shopping" in turn led to grade inflation, Vrinda Kadiyali,
associate professor of marketing and economics at Cornell's
Johnson Graduate School of Management,
one of the authors, explained in an
interview. The study, which is undergoing peer review, has
not yet been published.
So far,
however, the university has posted the median course grades
only on the Internet and has not yet put those grades on
transcripts. According to an article in the Cornell
Daily Sun, the school will start posting the grades
on transcripts in the spring. School officials were not
immediately available for comment.
The research
team hopes the school follows through on its plans. "That
will allow Cornell to hold itself to a higher standard
because it lets potential employers know where students
stand relative to other students," says Kadiyali.
The presence
of the median grade data is well-known to students but less
well-known to faculty. The researchers themselves were
prompted to do the study when one of them learned of the Web
site from a student questioning grades in her course.
Kadiyali says the formula the researchers used to come to
these conclusions could easily be applied to Internet
teacher rating sites, such as
ratemyprofessors.com. It's
something educators should consider, she adds, to find out
how these posts affect the decision-making of students and,
thus, professors and their courses.
Jensen Comment
The problem is that, in modern times, grades are the keys to the kingdom (i.e.,
keys unlocking the gates of graduate studies and professional careers) such that
higher grades rather than education tend to become the main student goals. A
hundred years ago, just getting a degree could open postgraduate gates in life
because such a small proportion of the population got college diplomas. With
higher percentages of the population getting college diplomas, high grades
became keys to the kingdom. In many colleges a C grade is viewed as very nearly
a failing grade.
At the same time, formal teaching evaluations and teacher rating sites like
ratemyprofessors.com have led to marked grade inflation in virtually all
colleges. The median grades are often A, A-, B+, or B. The poor student's C
grade is way below average. Just take a look at these course medians from
Cornell University ---
http://registrar.sas.cornell.edu/Grades/MedianGradeSP07.pdf
December 19, 2007 reply from a good friend who is
also a university-wide award-winning teacher
I'm not for easy grading, but I also wonder some
about this study. Could it be that the MORE EFFECTIVE instructors are also
easier graders and vice versa? I have no idea, but I'd like to see a control
for this variable.
And God help us if a professor is popular! What an
awful trait for an educator to have!
Jeez!
December 20, 2007 reply from Bob Jensen
Dear Jeez,
The terms "easy grader" and "easy grading"
are probably not suited for hypothesis testing. They are too hard to
precisely define. Some, probably most, "easy graders" counter by saying that
they are just better teachers and the students learned more because of
superior teaching. In many cases, but certainly not all cases, this is
probably true. Also, it is almost impossible to distinguish easy grading
from easy content. Students may learn everything in a course if the course
is easy enough to do so.
Instructors will also counter that they are
ethical in the sense of scaring off the poor students before the course
dropping deadlines. Instructors who snooker poor students to stay in their
courses and then hammer them down later on can show lower median grades
without punishing better students with C grades. Fortunately I don't think
there are many instructors who do this because they then face the risk of
getting hammered on teaching evaluations submitted by the worst students in
the course.
Easy grading/content is a lot like
pornography. It's probably impossible to precisely define but students know
it when they shop for easier courses before registering. It may be
possible to a limited extent to find easy graders in multiple section
courses having common examinations. For example, I was once a department
chair where our two basic accounting courses had over 30 sections each per
semester. But even there it is possible that all instructors were relatively
"easy" when they put together the common examinations.
It is widely known that nearly every college
in the U.S. suffers from grade inflation. Only an isolated few have been
successful in holding it down. College-wide grade averages have swung way
above C grades and in some instances even above B grades. It is now typical
for median grades of a college to hit the B+ or A- range, and in many
courses the median grade is an A.
The Cornell study cited above covering
800,000 course grades (a lot) did not identify easy graders. It identified
courses/sections having higher median grades. Higher median grades may not
signify easy grading or easy content, but students seem to know what they
are shopping for and the Cornell study found that students do shop around
for bargains. My guess is that the last courses left on the shelf are those
with median grades in the C range.
Bob Jensen
Bob Jensen's threads on grade inflation are
at
http://faculty.trinity.edu/rjensen/assess.htm#GradeInflation
The Devil is in the Details Not Discussed in This Report (but then we
never expected these unions to agree on learning assessment details)
"What Faculty Unions Say About Student Learning Outcomes Assessment,"
by Larry Gold (AFT), Gary Rhoades (AAUP), Mark Smith (NEA) & George Kuh (NILOA),
Occasional Paper No. 9 ---
http://www.learningoutcomeassessment.org/documents/Union.pdf
Faculty unions are under great pressure to become more focused on learning
performance and educational reforms apart from the traditional protectionism and
work rules focus of these unions. This report is an important start down the
assessments road. But when it gets down to details, these unions may never agree
on output assessment details (beyond having teachers subjectively grade their
students without any grade inflation restraints).
To my knowledge faculty unions have not taken any significant initiatives to
stop the greatest educational quality embarrassment at the K-20 levels ---
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
I would be more supportive of faculty unions if they set out with determination
to reverse the widespread cancer of grade inflation in the United States.
Instead they've contributed to the spread of this deadly disease.
I could be wrong about some of this and would greatly appreciate knowing
about significant efforts of teachers' unions to reverse grade inflation.
Possible details for outcomes assessment are discussed below.
Question
Can Intermediate II or Principles II or Tax II instructors best identify poor
teaching and/or overly generous grading in prerequisite courses?
"One Measure of a Professor: Students' Grades in Later Courses:
Course sequences may indicate instructors' strengths, but colleges find the data
hard to tease out," by David Glenn, Chronicle of Higher Education, January
9, 2011 ---
http://chronicle.com/article/One-Measure-of-a-Professor-/125867
According to one widely circulated grading
template, an A should signify that a student is "unquestionably prepared for
subsequent courses in the field."
But if a History 101 professor hands out buckets of
A's to students who really aren't prepared for intermediate courses, it is
possible that no one (other than the intermediate-course instructors) will
notice the problem. Some departments informally keep tabs on students'
preparedness, but almost no colleges systematically analyze students'
performance across course sequences.
That may be a lost opportunity. If colleges looked
carefully at students' performance in (for example) Calculus II courses,
some scholars say, they could harvest vital information about the Calculus I
sections where the students were originally trained. Which Calculus I
instructors are strongest? Which kinds of homework and classroom design are
most effective? Are some professors inflating grades?
Analyzing subsequent-course preparedness "is going
to give you a much, much more-reliable signal of quality than traditional
course-evaluation forms," says Bruce A. Weinberg, an associate professor of
economics at Ohio State University who recently scrutinized more than 14,000
students' performance across course sequences in his department.
Other scholars, however, contend that it is not so
easy to play this game. In practice, they say, course-sequence data are
almost impossible to analyze. Dozens of confounding variables can cloud the
picture. If the best-prepared students in a Spanish II course come from the
Spanish I section that met at 8 a.m., is that because that section had the
best instructor, or is it because the kind of student who is willing to wake
up at dawn is also the kind of student who is likely to be academically
strong?
Performance Patterns
To appreciate the potential power of
course-sequence analysis—and the statistical challenges involved in the
work—consider a study whose findings were published last year in the Journal
of Political Economy. Two economists analyzed more than 10,000 students'
performance over a seven-year period at the U.S. Air Force Academy.
The scholars found several remarkable patterns in
the data—patterns, they say, that might never have been noticed without this
kind of analysis.
For one thing, students' grades in intermediate
calculus courses were better (all else equal) if they had taken Calculus I
in a section taught by a senior, permanent faculty member, as opposed to a
short-term instructor drawn from the Air Force's officer corps. The "hard"
introductory sections, where students tended to struggle, yielded stronger
performances down the road.
One reason for that, the authors speculate, might
be that novice instructors of Calculus I taught to the test—that is, they
focused narrowly on preparing their students to pass the common final exam
that all Calculus I sections must take.
"It may be that certain faculty members guide their
students more toward direct memorization, rather than thinking more deeply
and broadly," says James E. West, a professor of economics at the Air Force
Academy, who was one of the study's authors. "The only way to really get at
this would be direct classroom observation. We're economists, so that's
outside our area of expertise."
A second discovery was that when students took
Calculus I from permanent faculty members, they were more likely to later
choose to take elective upper-level mathematics courses during their junior
and senior years.
"Even though associate and full professors produce
students who do significantly worse in the introductory course, their
students do better in the follow-on course," says the paper's second author,
Scott E. Carrell, an assistant professor of economics at the University of
California at Davis. "They're motivating these students to actually learn
mathematics."
Finally, Mr. Carrell and Mr. West looked at student
course evaluations. They found that students' Calculus I course evaluations
were positively correlated with their grades in that course but negatively
correlated with their grades in subsequent calculus courses. The more
students liked their Calculus I section, the less likely they were (all else
equal) to earn strong grades in the follow-up courses.
The same pattern held even when the scholars looked
only at the single question on the course-evaluation form that asked
students how much they had learned in Calculus I.
Students, this study suggests, are not always
accurate judges of how much progress they have made.
Mr. Carrell and Mr. West can say all of this with a
great deal of confidence because the Air Force Academy is not like most
places. Course sequences there are vastly easier to follow than at the
average civilian college.
All students at the academy are required to take a
common core of 30 credits. No matter how much they might hate Calculus I,
they still have to take Calculus II. Most course sections are small—about 20
students—and students have no discretion in choosing their sections or
instructors. Finally, every Calculus I section uses the same common tests,
which are graded by a pool of instructors. (One instructor grades Question 1
for every section, another instructor grades Question 2, and so on.)
All those factors make the Air Force Academy a
beautifully sterile environment for studying course sequences.
Mr. West and Mr. Carrell didn't have to worry that
their data would be contaminated by students self-selecting into sections
taught by supposedly easy instructors, or male instructors, or any other
bias. They didn't have to worry about how to account for students who never
took the follow-up courses, because every student takes the same core
sequence. And they didn't have to worry about some instructors subtly
grading the tests more leniently than others.
"These data," Mr. West says, "are really an order
of magnitude better than what you could get at a typical college."
Other Courses, Other Colleges
It wouldn't be worth the effort, Mr. Carrell says,
to try to crunch such numbers from his own campus, Davis. "If the good
students select the good teachers or the lazy students select the easy
teachers," he says, "then it's really hard to disentangle those selection
effects from the causal effect of the teacher. You just can't measure
motivation and that sort of thing."
But other scholars disagree. Course-sequence
studies, they say, can yield valuable information even if they aren't as
statistically pristine as the Air Force Academy's.
"Every university registrar has access to this kind
of data," says Valen E. Johnson, a professor of biostatistics at the
University of Texas's M.D. Anderson Cancer Center. "And at every university,
there are quite a few courses that are taught in sequence. So there are a
lot of opportunities to study the factors that predict subsequent success in
a field."
All that is required, Mr. Johnson says, is to
statistically control for the students' abilities and dispositions, using
proxies such as their standardized-test scores and their high-school class
rank. "Even just using their raw college GPA isn't too bad," Mr. Johnson
says.
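Johnson's suggestion can be sketched in code. The snippet below is a minimal illustration of residualizing on an ability proxy, not the actual method used in any of the studies quoted here: it fits a one-variable least-squares line predicting follow-on course grades from a proxy such as an SAT score, then averages the residuals by introductory section. The section labels, scores, and grades are all hypothetical.

```python
from collections import defaultdict
from statistics import mean

def section_effects(records):
    """Crude 'value added' signal per introductory section.

    records: list of (section, proxy, later_grade) tuples, where proxy is
    an ability measure such as an SAT score and later_grade is the grade
    earned in the follow-on course. Fits later_grade ~ a + b*proxy by
    ordinary least squares, then averages the residuals by section.
    """
    proxies = [p for _, p, _ in records]
    grades = [g for _, _, g in records]
    pbar, gbar = mean(proxies), mean(grades)
    b = sum((p - pbar) * (g - gbar) for _, p, g in records) / sum(
        (p - pbar) ** 2 for p in proxies
    )
    a = gbar - b * pbar
    residuals = defaultdict(list)
    for section, p, g in records:
        residuals[section].append(g - (a + b * p))
    return {section: mean(r) for section, r in residuals.items()}

# Hypothetical data: section "A" students out-earn their proxy-predicted
# follow-on grades; section "B" students under-earn.
data = [("A", 1200, 3.8), ("A", 1000, 3.2),
        ("B", 1200, 3.2), ("B", 1000, 2.6)]
effects = section_effects(data)
print(effects)  # section "A" positive, section "B" negative
```

Sections whose students consistently beat their proxy-predicted grades get positive averages; persistent negatives might flag weak preparation or inflated introductory grading, subject to all the self-selection caveats Carrell raises above.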
A decade ago, when Mr. Johnson was on the faculty
of Duke University, he analyzed a huge cache of data from that institution.
In that project—which he summarized in Grade Inflation: A Crisis in College
Education (Springer-Verlag, 2003)—he looked at 62 courses in the spring-1999
semester that had prerequisite courses that had been taught in multiple
sections in the fall of 1998.
Continued in article
Bob Jensen's threads on grade inflation are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
"Student Evaluations, Grade Inflation, and Declining Student Work Effort,"
by Richard Vedder, The Chronicle of Higher Education, June 19, 2010 ---
http://chronicle.com/blogPost/Student-Evaluations-Grade/24926/
The Chronicle's Susannah Tully has brought
my attention to a
great article in the prestigious Journal of
Political Economy by Scott Carrell and James West dealing with
professorial approaches to teaching, student evaluations and student
performance. It seems professors who do more than teach the basic bare-bones
knowledge and are in some sense more rigorous tend to get poorer student
evaluations (no surprise there). The less rigorous professors even get good
performances out of their students in the courses taught but those students
subsequently, in follow up courses, do poorer than the more rigorous
professors who do more than teach to the standardized test. Sounds
reasonable to me.
This got me thinking more about student evaluations
and some other evidence. Specifically, I would note that student evaluations
began to become popular during the 1960s and early 1970s as a common
evaluation tool for faculty. I would also note that most of the great grade
inflation in America has occurred since evaluations began, with national
grade point averages probably rising from the 2.5 or 2.6 range in about 1960
to well over 3.0 today (admittedly, this is based on limited but I believe
likely correct evidence). Professors to some extent can "buy" good
evaluations by giving high grades, so the evaluation process is probably a
major factor in grade inflation.
So what? What difference does it really make if the
average grade is a B- or C+ instead of a B or B+? This is where another
working paper of the National Bureau of Economic Research comes in. Philip
Babcock and Mindy Marks present evidence in Working Paper 15954 that in
1961, the average student spent 40 hours a week engaged in their
studies—attending class and studying. By 2003, this had declined by nearly
one-third to 27 hours weekly.
One advantage of getting old is that you gain some
historical perspective, and I have been in higher education for over half a
century and believe that Babcock and Marks are right. Students do less
reading, less studying, even less attending class than two generations ago.
Why? They don't have to do more. With relatively little work they can get
relatively high grades—say a B or even better. And student evaluations are
one factor in explaining the underlying grade inflation problem. Go to the
campusbuddy.com Web site and see for yourself evidence on the
grade-inflation phenomenon. The colleges of education, which in my judgment
should be put out of business (topic for another blog), are the worst
offenders, but the problem is pretty universal.
College is getting more expensive all the time—and
students are consuming less of it per year as measured by time usage. The
cost of college per hour spent in studying is rising a good deal faster than
what tuition data alone suggest. Why should the public subsidize mostly
middle-class kids working perhaps 900 hours a year (half the average of
American workers) on their studies?
What to do? We could move to reduce the impact of
student evaluations, or even eliminate them. One reason for their
existence—to convey knowledge to students about professors—is usually met
separately by other means, such as the
RateMyProfessors.com Web site. Alternatively,
colleges could by mandate or the use of financial incentives encourage
faculty to become more rigorous in their grading. If state subsidies started
to vary inversely in size with grade-point averages, state schools would
quickly reduce grade inflation. In any case, we need more research into WHY
students today are working less. But I would bet a few bucks that grade
inflation and student evaluations are part of the answer.
Bob Jensen attributes most of the grade inflation problem in North America to
teaching evaluations that greatly impact hiring (when faculty seek a new
employer), promotion, tenure, and other factors affected by performance
evaluations. In fact I call grade inflation the Number One Disgrace in Higher
Education ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
Now that I'm retired, I've cherry-picked from the stacks of teaching
evaluations and plan to carry only the best outcomes when I eventually confront
St Peter at the
Pearly Gates.
But there's a nasty rumor among my retired professor friends that St Peter has
online access to grading distributions. Better watch out!
Questions
Was she really so tough as to be removed from classroom teaching by LSU?
Should she teach in a way that improves the odds that guessing can lead to a
better course grade?
Note that she is a tenured faculty member at LSU. She probably wouldn't dare
be so tough if she did not have tenure.
Louisiana State U. removes a tough grader from her
course mid-semester, and raises the grades of her students. Faculty leaders see
a betrayal of values and due process.
"Who Really Failed? Dominique G. Homberger won't apologize for setting high
expectations for her students," by Scott Jaschik, Inside Higher Ed,
April 15, 2010 ---
http://www.insidehighered.com/news/2010/04/15/lsu
The biology professor at Louisiana State University
at Baton Rouge gives brief quizzes at the beginning of every class, to
assure attendance and to make sure students are doing the reading. On her
tests, she doesn't use a curve, as she believes that students must achieve
mastery of the subject matter, not just achieve more mastery than the worst
students in the course. For multiple choice questions, she gives 10 possible
answers, not the expected 4, as she doesn't want students to get very far
with guessing.
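The arithmetic behind her 10-option format is simple binomial probability. The sketch below is illustrative only; the 20-question exam size and 12-correct pass mark are made-up numbers, not anything from her course. It compares the chance of passing purely by blind guessing under 4-option and 10-option formats.

```python
from math import comb

def pass_probability(n_questions, n_options, pass_mark):
    """P(at least pass_mark correct) when every answer is a blind guess,
    i.e. the upper tail of a Binomial(n_questions, 1/n_options)."""
    p = 1.0 / n_options
    return sum(
        comb(n_questions, k) * p**k * (1 - p) ** (n_questions - k)
        for k in range(pass_mark, n_questions + 1)
    )

# Hypothetical 20-question exam where 12 correct answers pass:
print(pass_probability(20, 4, 12))   # 4 options: roughly one in a thousand
print(pass_probability(20, 10, 12))  # 10 options: vanishingly small
```

Moving from 4 options to 10 drops the guessing success probability per question from 0.25 to 0.10, which shrinks the upper tail by several orders of magnitude.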
Students in introductory biology don't need to
worry about meeting her standards anymore. LSU removed her from teaching,
mid-semester, and raised the grades of students in the class. In so doing,
the university's administration has set off a debate about grade inflation,
due process and a professor's right to set standards in her own course.
To Homberger and her supporters, the university's
action has violated principles of academic freedom and weakened the faculty.
"This is terrible. It undercuts all of what we do,"
said Brooks Ellwood, president of the LSU Chapter of the American
Association of University Professors, and the Robey H. Clark Distinguished
Professor of Geology. "If you are a non-tenured professor at this
university, you have to think very seriously about whether you are going to
fail too many students for the administration to tolerate."
Even for those who, like Homberger, are tenured,
there is a risk of losing the ability to stick to your standards, he said.
Teaching geology, he said, he has found that there are students who get
upset when he talks about the actual age of the earth and about evolution.
"Now students can complain to a dean" and have him removed, Ellwood said. "I
worry that my ability to teach in the classroom has been diminished."
Kevin Carman, dean of the College of Basic
Sciences, did not respond to requests for a phone interview Wednesday. But
he issued a statement through the university's public relations office that
said: "LSU takes academic freedom very seriously, but it takes the needs of
its students seriously as well. There was an issue with this particular
class that we felt needed to be addressed.
"The class in question is an entry-level biology
class for non-science majors, and, at mid-term, more than 90 percent of the
students in Dr. Homberger's class were failing or had dropped the class. The
extreme nature of the grading raised a concern, and we felt it was important
to take some action to ensure that our students receive a rigorous, but
fair, education. Professor Homberger is not being penalized in any way; her
salary has not been decreased nor has any aspect of her appointment been
changed."
In an interview, Homberger said that there were
numerous flaws with Carman's statement. She said that it was true that most
students failed the first of four exams in the course. But she also said
that she told the students that -- despite her tough grading policies -- she
believes in giving credit to those who improve over the course of the
semester.
At the point that she was removed, she said, some
students in the course might not have been able to do much better than a D,
but every student could have earned a passing grade. Further, she said that
her tough policy was already having an impact, and that the grades on her
second test were much higher (she was removed from teaching right after she
gave that exam), and that quiz scores were up sharply. Students got the
message from her first test, and were working harder, she said.
"I believe in these students. They are capable,"
she said. And given that LSU boasts of being the state flagship, she said,
she should hold students to high standards. Many of these students are in
their first year, and are taking their first college-level science course,
so there is an adjustment for them to make, Homberger said. But that doesn't
mean professors should lower standards.
Homberger said she was told that some students had
complained about her grades on the first test. "We are listening to the
students who make excuses, and this is unfair to the other students," she
said. "I think it's unfair to the students" to send a message that the way
to deal with a difficult learning situation is "to complain" rather than to
study harder.
Further, she said that she was never informed that
administrators had any concerns about her course until she received a
notification that she was no longer teaching it. (She noted that the
university's learning management system allowed superiors to review the
grades on her first test in the course.)
And while her dean authorized her removal from
teaching the course, she said, he never once sat in on her course. Further,
she said that in more than 30 years of teaching at LSU, no dean had ever
done so, although they would have been welcome.
"Why didn't they talk to me?" she asked.
Homberger said that she has not had any serious
grading disputes before, although it's been about 15 years since she taught
an introductory course. She has been teaching senior-level and graduate
courses, and this year, she asked her department's leaders where they could
use help, and accepted their suggestion that she take on the intro course.
In discussions with colleagues after she was
removed from the course, Homberger said that no one has ever questioned
whether any of the test questions were unfair or unfairly graded, but that
she was told that she may include "too many facts" on her tests.
Ellwood, the campus AAUP chapter president, said
that his group had verified that no one informed Homberger of concerns
before removing her from the course, and that no one had questioned the
integrity of her tests. He also said that the scores on the second test were
notably better than on the first one, suggesting that students were
responding to the need to do more work. "She's very rigorous. There's no
doubt about that," he said.
Based on its investigation, the AAUP chapter has
sent a letter to administrators, arguing that they violated Homberger's
academic freedom and due process rights and demanding an apology. (No
apology has been forthcoming.)
Cary Nelson, national president of the AAUP, said
that the organization has always believed that "an instructor has the
responsibility for assigning grades," and that the LSU case was "disturbing
in several respects." He noted that "the practice of assigning tough grades
in an early assignment as a wake-up call to students is quite common" and
that "the instructor made it clear that she had no intention of failing that
many students when it came time for final grades."
If administrators were concerned, he said, they had
a responsibility to "discuss the matter fully with the instructor" before
taking any action. And he said that "removal from the classroom mid-semester
is a serious sanction that requires all the protections of due process."
Nelson said that the incident "raises serious questions about violations of
pedagogical freedoms."
Stuart Rojstaczer, a former Duke University
professor who is the founder of
GradeInflation.com,
a Web site that publishes research on grading,
questioned whether LSU was really trying to help students. "How many times
has Dean Carman removed a professor from a class who was giving more than 90
percent As?" he asked.
LSU's public affairs office did not respond to
follow-up questions about the statement it issued, and to the criticisms
made by various faculty members.
Homberger declined to give out the names of
students who have expressed support, saying that to do so would violate her
confidentiality obligations. But she released (without student names)
answers to a bonus question on the course's second test. The question asked
students to describe "the biggest 'AHA' reaction" they had had during the
course.
Many of the reactions were about various issues in
biology -- with evolution as a major topic. But a number dealt with grades
and work habits. One was critical: "When I found out my test grade, I almost
had a heart attack."
But many other comments about the course standards
were positive, with several students specifically praising Homberger's
advice that they form study groups. One student wrote: “My biggest
AHA‐reaction in this course is that I need to study for this course every
night to make a good grade. I must also attend class, take good notes, and
have study sessions with others. Usually a little studying can get me by but
not with this class which is why it is my AHA‐reaction."
Jensen Comment
Only four students have complained about her to date on RateMyProfessor, which
is not enough on which to base any opinion. One student reports that a grade
of 70 on a quiz gave him a rank of 20 out of 217 students. This kind of thing
happened to me all along, but I curved the results such that a 70 could actually
be an A grade. Another student complained that she did not give them the answers
on Moodle in advance. Say What?
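The curving Jensen describes, where a raw 70 that ranks 20th out of 217 can still be an A, amounts to grading by percentile rank rather than raw score. The sketch below is a generic illustration; the percentile cutoffs are hypothetical, not Jensen's actual scale.

```python
def curve_by_rank(scores,
                  cutoffs=((0.90, "A"), (0.70, "B"), (0.40, "C"), (0.15, "D"))):
    """Map raw scores to letters by percentile rank rather than raw value.

    cutoffs: (minimum percentile, letter) pairs, highest first. A score's
    percentile is the fraction of the class scoring at or below it, so a
    modest raw score near the top of the class still earns an A.
    """
    n = len(scores)
    ranked = sorted(scores)

    def letter(x):
        percentile = sum(s <= x for s in ranked) / n
        for minimum, grade in cutoffs:
            if percentile >= minimum:
                return grade
        return "F"

    return {x: letter(x) for x in set(scores)}

# Hypothetical class of 100 with raw scores 1..100:
grades = curve_by_rank(list(range(1, 101)))
print(grades[95], grades[55], grades[10])  # A C F
```

Under such a curve the letter depends only on standing within the class, which is exactly why posting raw medians alone can mislead students about how hard a course really grades.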
Get better teaching evaluations in Lake Wobegon by grading everybody above
average no matter what. Give all A grades and keep them happy at LSU.
Grade Inflation is the Number One Scandal of Higher Education (in my
viewpoint)
Bob Jensen's threads on grade inflation are at
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
"The Coming Meltdown in Higher Education," by Seth Godin, Chronicle
of Higher Education, May 3, 2010 ---
http://chronicle.com/article/The-Coming-Meltdown-in-Higher/65398/
For 400 years, higher education in the United
States has been on a roll. From Harvard asking Galileo to be a guest
professor in the 1600s to millions tuning in to watch a team of unpaid
athletes play another team of unpaid athletes in some college sporting
event, the amounts of time and money and prestige in the college world have
been climbing.
I'm afraid that's about to crash and burn. Here's
how I'm looking at it.
Most undergraduate college and university programs
are organized to give an average education to average students.
Pick up any college brochure or catalog. Delete the
brand names and the map. Can you tell which college it is? While there are
outliers (like St. John's College, in Maryland, Deep Springs College, and
Full Sail University), most colleges aren't really outliers. They are mass
marketers.
Stop for a second and consider the impact of that
choice. By emphasizing mass and sameness and rankings, colleges have changed
their missions.
This works great in an industrial economy where we
can't churn out standardized students fast enough, and where the demand is
huge because the premium earned by a college graduate dwarfs the cost. But
...
College has gotten expensive far faster than wages
have gone up.
As a result, millions of people are in very serious
debt, debt so big it might take decades to repay. Word gets around. Won't
get fooled again.
This leads to a crop of potential college students
who can (and will) no longer just blindly go to the "best" school they get
into.
The definition of "best" is under siege.
Why do colleges send millions (!) of
undifferentiated pieces of junk mail to high-school students now? We will
waive the admission fee! We have a one-page application! Apply! This is some
of the most amateur and bland direct mail I've ever seen. Why do it?
Biggest reason: So colleges can reject more
applicants. The more applicants they reject, the higher they rank in U.S.
News and other rankings. And thus the rush to game the rankings continues,
which is a sign that the marketers in question (the colleges) are getting
desperate for more than their fair share. Why bother making your education
more useful if you can more easily make it appear to be more useful?
The correlation between a typical college degree
and success is suspect.
College wasn't originally designed to be merely a
continuation of high school (but with more binge drinking). In many places,
though, that's what it has become. The data I'm seeing show that a degree
(from one of those famous schools, with or without a football team) doesn't
translate into significantly better career opportunities, a better job, or
more happiness than does a degree from a cheaper institution.
Accreditation isn't the solution, it's the problem.
A lot of these ills are the result of uniform
accreditation programs that have pushed high-cost, low-return policies on
institutions and rewarded colleges that churn out young wannabe professors
instead of creating experiences that turn out leaders and problem solvers.
Just as we're watching the disintegration of
old-school marketers with mass-market products, I think we're about to see
significant cracks in old-school colleges with mass-market degrees.
Back before the digital revolution, access to
information was an issue. The size of the library mattered. One reason to go
to college was to get access. Today that access is worth a lot less. The
valuable things that students take away from college are interactions with
great minds (usually professors who actually teach and actually care) and
non-class activities that shape them as people. The question I'd ask: Is the
money that mass-marketing colleges spend on marketing themselves and making
themselves bigger well spent? Are they organizing for changing lives or for
ranking high? Does NYU have to get so much bigger? Why?
The solutions are obvious. There are tons of ways
to get a cheap liberal education, one that exposes you to the world, permits
you to have significant interactions with people who matter, and teaches you
to make a difference (see DIY U: Edupunks, Edupreneurs, and the Coming
Transformation of Higher Education, by Anya Kamenetz). Most of these ways,
though, aren't heavily marketed, nor do they involve going to a
tradition-steeped 200-year-old institution with a wrestling team. Things
like gap years, research internships, and entrepreneurial or social ventures
after high school are opening doors for students who are eager to discover
the new.
The only people who haven't gotten the memo are
anxious helicopter parents, mass-marketing colleges, and traditional
employers. And all three are waking up and facing new circumstances.
Seth Godin is the author of 12 books, including Linchpin: Are You
Indispensable?, published this year by Portfolio. He is founder and CEO of
Squidoo.com, a publishing platform that allows users to generate Web pages
on any subject of their choosing. This article is reprinted from his blog.
Bob Jensen's threads on our compassless colleges ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#Berkowitz
Bob Jensen's threads on the universal disgrace of grade inflation ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
Update on Wal-Mart University
Does this pass the Academy’s smell test?
"Wal-Mart Employees Get New College Program—Online," by Marc Parry,
Chronicle of Higher Education, June 3, 2010 ---
http://chronicle.com/blogPost/Wal-Mart-Employees-Get-New/24504/?sid=at&utm_source=at&utm_medium=en
The American Public
University System
has been described as a higher-education version
of Wal-Mart: a publicly traded corporation that mass-markets moderately priced
degrees in many fields.
Now it's more than an
analogy. Under a deal
announced today, the for-profit online university
will offer Wal-Mart workers discounted tuition and credit for job experience.
Such alliances are
nothing new; see these materials from
Strayer and
Capella for other examples. But Wal-Mart is the
country's largest retailer. And the company is pledging to spend $50-million
over three years to help employees cover the cost of tuition and books beyond
the discounted rate, according to the
Associated Press.
"What's most significant
about this is that, given that APU is very small, this is a deal that has the
potential to drive enrollments that are above what investors are already
expecting from them," Trace A. Urdan, an analyst with Signal Hill Capital Group,
told Wired Campus. "Which is why the stock is up."
Wal-Mart workers will be
able to receive credit—without having to pay for it—for job training in subjects
like ethics and retail inventory management, according to the AP.
Wal-Mart employs 1.4
million people in the U.S. Roughly half of them have a high-school diploma but
no college degree, according to
The New York Times. A department-level
manager would end up paying about $7,900 for an associate degree, factoring in
the work credits and tuition discount, the newspaper reported.
“If 10 to 15 percent of
employees take advantage of this, that’s like graduating three Ohio State
Universities,” Sara Martinez Tucker, a former under secretary of education who
is now on Wal-Mart’s external advisory council, told the Times.
"News Analysis: Is 'Wal-Mart U.' a Good Bargain for Students?" by Marc
Parry, Chronicle of Higher Education, June 13, 2010 ---
http://chronicle.com/article/Is-Wal-Mart-U-a-Good/65933/?sid=at&utm_source=at&utm_medium=en
There might have been a
Wal-Mart University.
As the world's largest
retailer weighed its options for making a big splash in education, executives
told one potential academic partner that Wal-Mart Stores was considering buying
a university or starting its own.
"Wal-Mart U." never
happened. Instead, the retailer chose a third option: a landmark alliance that
will make a little-known for-profit institution, American Public University, the
favored online-education provider to Wal-Mart's 1.4 million workers in the
United States.
A closer look at the deal
announced this month shows how American Public slashed its prices and adapted
its curriculum to snare a corporate client that could transform its business. It
also raises one basic question: Is this a good bargain for students?
Adult-learning leaders
praise Wal-Mart, the nation's largest private employer, for investing in
education. But some of those same experts wonder how low-paid workers will be
able to afford the cost of a degree from the private Web-based university the
company selected as a partner, and why Wal-Mart chose American Public when
community-college options might be cheaper. They also question how easily
workers will be able to transfer APU credits to other colleges, given that the
university plans to count significant amounts of Wal-Mart job training and
experience as academic credit toward its degrees.
For example, cashiers
with one year's experience could get six credits for an American Public class
called "Customer Relations," provided they received an "on target" or "above
target" on their last performance evaluation, said Deisha Galberth, a Wal-Mart
spokeswoman. A department manager's training and experience could be worth 24
credit hours toward courses like retail ethics, organizational fundamentals, or
human-resource fundamentals, she said.
Altogether, employees
could earn up to 45 percent of the credit for an associate or bachelor's degree
at APU "based on what they have learned in their career at Wal-Mart," according
to the retailer's Web site.
Janet K. Poley, president
of the American Distance Education Consortium, points out that this arrangement
could saddle Wal-Mart employees with a "nontransferable coupon," as one blogger
has described it.
"I now see where the
'trick' is—if a person gets credit for Wal-Mart courses and Wal-Mart work, they
aren't likely to be able to transfer those to much of anyplace else," Ms. Poley
wrote in an e-mail to The Chronicle. Transferability could be important, given
the high turnover rate in the retail industry.
Inside the Deal
Wal-Mart screened 81 colleges before signing its deal with American Public University.
One that talked extensively with the retailer was University of Maryland
University College, a 94,000-student state institution that is a national leader
in online education. According to University College's president, Susan C.
Aldridge, it was during early discussions that Wal-Mart executives told her the
company was considering whether it should buy a college or create its own
college.
When asked to confirm
that, Ms. Galberth said only that Wal-Mart "brainstormed every possible option
for providing our associates with a convenient and affordable way to attend
college while working at Wal-Mart and Sam's Club," which is also owned by
Wal-Mart Stores. "We chose to partner with APU to reach this goal. We have no
plans to purchase a brick-and-mortar university or enter the online education
business," she said.
The Wal-Mart deal was
something of a coming-out party for American Public University. The institution
is part of a 70,000-student system that also includes American Military
University and that largely enrolls active-duty military personnel. As American
Public turned its attention to luring the retail behemoth, it was apparently
able to be more flexible than other colleges and willing to "go the extra mile"
to accommodate Wal-Mart, said Jeffrey M. Silber, a stock analyst and managing
director of BMO Capital Markets. That flexibility included customizing programs.
APU has a management degree with courses in retail, and its deans worked with
Wal-Mart to add more courses to build a retail concentration, said Wallace E.
Boston, the system's president and chief executive.
It also enticed Wal-Mart
with a stable technology platform; tuition prices that don't vary across state
lines, as they do for public colleges; and online degrees in fields that would
be attractive to workers, like transportation logistics.
Unlike American Public,
Maryland's University College would not put a deep discount on the table.
Credit for Wal-Mart work
was also an issue, Ms. Aldridge said.
"We feel very strongly
that any university academic credit that's given for training needs to be
training or experience at the university level," Ms. Aldridge said. "And we have
some very set standards in that regard. And I'm not certain that we would have
been able to offer a significant amount of university credit for some of the
on-the-job training that was provided there."
Awarding credit for
college-level learning gained outside the classroom is a long-standing practice,
one embraced by about 60 percent of higher-education institutions, according to
the most recent survey by the Council for Adult and Experiential Learning. A
student might translate any number of experiences into credit: job training,
military service, hobbies, volunteer service, travel, civic activities.
Pamela J. Tate, president
and chief executive of the council, said what's important isn't the percentage
of credits students get from prior learning—a number that can vary widely.
What's important, she said, is that students can demonstrate knowledge. Workers
might know how they keep the books at a company, she explained. But that doesn't
automatically mean they've learned the material of a college accounting course.
Karan Powell, senior vice
president and academic dean at American Public University system, said credit
evaluation at her institution "is a serious, rigorous, and conservative
process." But will the credits transfer? "Every college or university
establishes its own transfer-credit policies as they apply to experiential
learning as well as credit from other institutions," she said in an e-mail.
"Therefore, it would depend on the school to which a Wal-Mart employee wanted to
transfer."
Affordable on $12 an Hour?
Then there's the question of whether low-wage workers will be able to
afford the degrees. One of the key features of this deal is the discount that
Wal-Mart negotiated with American Public.
"Wal-Mart is bringing the
same procurement policies to education that it brings to toothpaste," said John
F. Ebersole, president of Excelsior College, a distance-learning institution
based in New York.
American Public
University's tuition was already cheap by for-profit standards and competitive
with other nonprofit college options. It agreed to go even cheaper for Wal-Mart,
offering grants equal to 15 percent of tuition for the company's workers. Those
employees will pay about $11,700 for an associate degree and $24,000 for a
bachelor's degree.
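The pricing described above can be sanity-checked with a short sketch. The 15 percent grant and the net prices workers pay come from the article; the pre-grant list prices are back-calculated here and are assumptions, not published APU tuition rates.

```python
# Sketch of the discount arithmetic reported above. The 15% grant rate and
# the net prices are from the article; the implied full list prices are
# back-calculated assumptions, not official APU figures.
GRANT_RATE = 0.15  # grant equal to 15% of tuition

def implied_list_price(net_price, grant_rate=GRANT_RATE):
    """Back out the pre-grant tuition from the price a worker pays."""
    return net_price / (1 - grant_rate)

associate_net = 11_700   # quoted price for an associate degree
bachelor_net = 24_000    # quoted price for a bachelor's degree

print(round(implied_list_price(associate_net)))  # ~13,765 before the grant
print(round(implied_list_price(bachelor_net)))   # ~28,235 before the grant
```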
But several experts
pointed out that public colleges might provide a more affordable option.
The Western Association
of Food Chains, for example, has a partnership with 135 community colleges in
the western United States to offer an associate degree in retail management
completely online, Ms. Tate said. Many of the colleges also grant credit for
prior learning. Though the tuition varies by state, the average tuition cost to
earn the degree is about $4,500, she said. By contrast, she said, the American
Public degree is "really expensive" for a front-line worker who might make $12
an hour.
"What I couldn't figure
out is how they would be able to afford it unless Wal-Mart was going to pay a
substantial part of the tuition," she said. "If not, then what you've got is
this program that looks really good, but the actual cost to the person is a
whole lot more than if they were going to go to community college and get their
prior learning credits assessed there."
How the retailer might
subsidize its employees' education is an open question. In announcing the
program, Wal-Mart pledged to spend up to $50-million over the next three years
"to provide tuition assistance and other tools to help associates prepare for
college-level work and complete their degrees."
Alicia Ledlie, the senior
director at Wal-Mart who has been shepherding this effort, told The Chronicle in
an e-mail that the company is "right now working through the design of those
programs and how they will benefit associates," with more details to be released
later this summer.
One thing is clear: The
deal has a big financial impact on American Public. Wal-Mart estimates that
about 700,000 of its 1.4 million American employees lack a college degree.
Sara Martinez Tucker, a
former under secretary of education who is now on Wal-Mart's external advisory
council, suggests 10 or 15 percent of Wal-Mart associates could sign up.
"That's 140,000 college
degrees," she told The Chronicle. "Imagine three Ohio State Universities' worth
of graduates, which is huge in American higher education."
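Tucker's estimate can be checked with back-of-the-envelope arithmetic on the figures quoted above; this is a sketch, not part of the original reporting.

```python
# Back-of-the-envelope check of the enrollment estimate quoted above.
total_employees = 1_400_000   # Wal-Mart's U.S. workforce, per the article
without_degree = 700_000      # the roughly half lacking a college degree

low, high = 0.10, 0.15        # Tucker's projected sign-up range

# Applying the low end of the range to the full workforce reproduces
# the quoted 140,000 figure:
print(round(total_employees * low))                               # 140000
# Applied only to the workers without degrees, the range is smaller:
print(round(without_degree * low), round(without_degree * high))  # 70000 105000
```

Note that the quoted 140,000 only follows if the percentage is applied to all 1.4 million employees, not just to the 700,000 without degrees.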
Jensen Comment
This Wal-Mart Fringe Benefit Should Be Carefully Investigated by Employees
It does not sit well with me!
- If Wal-Mart
would pay the same amount of benefit for online state university degrees
(e.g., the University of Wisconsin has over 100,000 online students) as the
for-profit American Public University that charges higher tuition even at a
Wal-Mart discount, why would a student choose the less prestigious and
relatively unknown American Public University? Possibly American Public wins
out because it's easier to get A & B grades with less academic ability and
less work.
"Want a Higher G.P.A.? Go to a Private College: A 50-year rise in
grade-point averages is being fueled by private institutions, a recent study
finds," by Catherine Rampell. The New York Times, April 19, 2010 ---
http://finance.yahoo.com/college-education/article/109339/want-a-higher-gpa-go-to-a-private-college?mod=edu-collegeprep
- I certainly hope
that the Wal-Mart contributions toward tuition can be extended to
state-supported colleges and universities having more respected credits. For
example, online degrees from the University of Wisconsin or the University
of Maryland are likely much more respected for job mobility and for
acceptance into graduate schools.
- Giving credit
for "job experience" is an absolute turn-off for me. Most adults have some
form of "job experience." This is just not equivalent to course credit
earned in college, where students face examinations and academic
projects. Weaker colleges generally use the credit-for-"job-experience"
ploy as a come-on to attract applicants. But the credits awarded for job
experience are not likely to be transferable to traditional colleges and
universities.
- The "discounted
tuition" in this for-profit online program is likely to be higher than the
in-state tuition from state-supported colleges and universities.
- I'm dubious
about the standards for admission in for-profit colleges as well as the
rigor of the courses. Watch the Frontline video served up by PBS.
On May 4, 2010, PBS Frontline broadcast an hour-long video called College
Inc. --- a sobering analysis of for-profit onsite and online colleges and
universities.
For a time you can watch the video free online
---
Click Here
http://www.pbs.org/wgbh/pages/frontline/collegeinc/view/?utm_campaign=viewpage&utm_medium=toparea&utm_source=toparea
- The American
Public University System is accredited by the North Central Association
accrediting agency that is now under investigation for weakened standards
for college credits.
"Inspector General Keeps the Pressure on a Regional
Accreditor," by Eric Kelderman, Chronicle of Higher Education, May 27,
2010 ---
http://chronicle.com/article/Inspector-General-Keeps-the/65691/?sid=at&utm_source=at&utm_medium=en
The inspector general of
the U.S. Department of Education has reaffirmed a recommendation that the
department should consider sanctions for the Higher Learning Commission of the
North Central Association of Colleges and Schools, one of the nation's major
regional accrediting organizations. In a
report this week, the Office of Inspector General
issued its final recommendations stemming from a
2009 examination of the commission's standards
for measuring credit hours and program length, and affirmed its earlier critique
that the commission had been too lax in its standards for determining the amount
of credit a student receives for course work.
The Higher Learning
Commission accredits more than 1,000 institutions in 19 states. The Office of
Inspector General completed similar reports for two other regional accreditors
late last year but did not suggest any sanctions for those organizations.
Possible sanctions
against an accreditor include limiting, suspending, or terminating its
recognition by the secretary of education as a reliable authority for
determining the quality of education at the institutions it accredits. Colleges
need accreditation from a federally recognized agency in order to be eligible to
participate in the federal student-aid programs.
In its examination of
the Higher Learning Commission, the office looked at the commission's
reaccreditation of six member institutions: Baker College, DePaul University,
Kaplan University, Ohio State University, the University of Minnesota-Twin
Cities, and the University of Phoenix. The office chose those institutions—two
public, two private, and two proprietary institutions—as those that received the
highest amounts of federal funds under Title IV, the section of the Higher
Education Act that governs the federal student-aid programs.
It also reviewed the
accreditation status of American InterContinental University and the Art
Institute of Colorado, two institutions that had sought initial accreditation
from the commission during the period the office studied.
The review found that
the Higher Learning Commission "does not have an established definition of a
credit hour or minimum requirements for program length and the assignment of
credit hours," the report says. "The lack of a credit-hour definition and
minimum requirements could result in inflated credit hours, the improper
designation of full-time student status, and the over-awarding of Title IV
funds," the office concluded in its letter to the commission's president, Sylvia
Manning.
More important, the
office reported that the commission had allowed American InterContinental
University to become accredited in 2009 despite having an "egregious" credit
policy.
In a letter responding
to the commission, Ms. Manning wrote that the inspector general had ignored the
limitations the accreditor had placed on American InterContinental to ensure
that the institution improved its standards, an effort that had achieved the
intended results, she said. "These restrictions were intended to force change at
the institution and force it quickly."
Continued in article
Jensen Comment
The most successful for-profit universities advertise heavily about credibility
due to being "regionally accredited." In some cases this accreditation was
initially bought rather than achieved such as by buying up a small, albeit still
accredited, bankrupt not-for-profit private college that's washed up on the
beach. This raises the question of how some for-profit universities maintain
the spirit of accreditation acquired in this manner.
Bob Jensen's threads on assessment are at
http://faculty.trinity.edu/rjensen/assess.htm
"Want a Higher G.P.A.? Go to a Private College: A 50-year rise in
grade-point averages is being fueled by private institutions, a recent study
finds," by Catherine Rampell. The New York Times, April 19, 2010 ---
http://finance.yahoo.com/college-education/article/109339/want-a-higher-gpa-go-to-a-private-college?mod=edu-collegeprep
Over the last 50 years,
college grade-point averages have risen about 0.1 points per decade, with
private schools fueling the most grade inflation, a recent study finds.
The study, by Stuart
Rojstaczer and Christopher Healy, uses historical data from 80 four-year
colleges and universities. It finds that G.P.A.'s have risen from a national
average of 2.52 in the 1950s to about 3.11 by the middle of the last decade.
For the first half of the
20th century, grading at private schools and public schools rose more or less in
tandem. But starting in the 1950s, grading at public and private schools began
to diverge. Students at private schools started receiving significantly higher
grades than those received by their equally-qualified peers -- based on SAT
scores and other measures -- at public schools.
In other words, both
categories of schools inflated their grades, but private schools inflated their
grades more.
Based on contemporary
grading data the authors collected from 160 schools, the average G.P.A. at
private colleges and universities today is 3.3. At public schools, it is 3.0.
The authors suggest that
these laxer grading standards may help explain why private school students are
over-represented in top medical, business and law schools and certain Ph.D.
programs: Admissions officers are fooled by private school students' especially
inflated grades.
Additionally, the study
found, science departments today grade on average 0.4 points lower than
humanities departments, and 0.2 points lower than social science departments.
Such harsher grading for the sciences appears to have existed for at least 40
years, and perhaps much longer.
Relatively lower grades
in the sciences discourage American students from studying such disciplines, the
authors argue.
"Partly because of our
current ad hoc grading system, it is not surprising that the U.S. has to rely
heavily upon foreign-born graduate students for technical fields of research and
upon foreign-born employees in its technology firms," they write.
These overall trends, if
not the specific numbers, are no surprise to anyone who has followed the debates
about grade inflation. But so long as schools believe that granting higher
grades advantages their alumni, there will be little or no incentive to impose
stricter grading standards unilaterally.
"Employers Favor State Schools for Hires," by Jennifer Merritt, The
Wall Street Journal, September 13, 2010 ---
http://online.wsj.com/article/SB10001424052748703597204575483730506372718.html
U.S. companies largely favor graduates of big state
universities over Ivy League and other elite liberal-arts schools when
hiring to fill entry-level jobs, a Wall Street Journal study found.
In the study—which surveyed 479 of the largest
public and private companies, nonprofits and government
agencies—Pennsylvania State University, Texas A&M University and University
of Illinois at Urbana-Champaign ranked as top picks for graduates best
prepared and most able to succeed.
Of the top 25 schools as rated by these employers,
19 were public, one was Ivy League (Cornell University) and the rest were
private, including Carnegie Mellon and University of Notre Dame.
The Journal research represents a systematic effort
to assess colleges by surveying employers' recruiters—who decide where to
seek out new hires—instead of relying primarily on measures such as student
test scores, college admission rates or graduates' starting salaries. As a
group, the survey participants hired more than 43,000 new graduates in the
past year.
The recruiters' perceptions matter all the more
given that employers today are visiting fewer schools, partly due to the
weak economy. Instead of casting a wide net, the Journal found, big
employers are focusing more intently on nearby or strategically located
research institutions with whom they can forge deeper partnerships with
faculty.
The Journal study didn't examine smaller companies
because they generally don't interact with as many colleges. In addition,
the survey focused on hiring students with bachelor's as opposed to graduate
degrees.
The research highlighted a split in perception
about state and private schools. Recruiters who named an Ivy League or elite
liberal-arts school as a top pick say they prize their graduates' intellect
and cachet among clients, as well as "soft skills" like critical thinking
and communication. But many companies said they need people with practical
skills to serve as operations managers, product developers, business
analysts and engineers. For those employees—the bulk of their work
force—they turn to state institutions or other private schools offering
that.
Jensen Comment
I have two (largely untested) theories on employer preference for graduates of
state universities. Firstly, I think state universities are preferred for hiring
over for-profit universities because prospective employers have doubts about
admission standards, curricula, grade inflation, and academic rigor of virtually
all for-profit universities. Secondly, I think prospective employers know there
is significant grade inflation in both non-profit private and public colleges,
but employers are more suspicious of worse grade inflation in non-profit private
colleges, especially small private colleges that perhaps are favored by high
school graduates fearful of the grading competition in state universities.
“Gaming for GPA” by Bob Jensen
So your goal in education is a gpa
That’s as close as possible to an average of A;
First you enroll in an almost unknown and easy private college
Where your transcript records accumulated knowledge.
But take the hardest courses in prestigious schools
Where you accumulate transfer credit pools;
Then transfer the A credits to your transcript cool
And bury the other credits where you were a fool.
And when the Great Scorer comes to write against your name
It’s not a question of whether you won or went lame;
You always win if you know how to play the game
And for a lifetime there’s no transcript record of your shame.
(but you may not
win if prospective employers suspect you played this game)
"Want a Higher G.P.A.? Go to a
Private College: A 50-year rise in grade-point averages is being fueled by
private institutions, a recent study finds," by Catherine Rampell. The New
York Times, April 19, 2010 ---
http://finance.yahoo.com/college-education/article/109339/want-a-higher-gpa-go-to-a-private-college?mod=edu-collegeprep
Grade Inflation is the Number One Disgrace in Higher Education ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
Buying grades is also common in some foreign universities ---
http://works.bepress.com/cgi/viewcontent.cgi?article=1000&context=vincent_johnson
Red-Hot Chili Peppers on RateMyProfessor
August 13, 2010 message from David Albrecht
[albrecht@PROFALBRECHT.COM]
If you (like me) have never
received a red hot chili pepper at ratemyprofessors.com (Bob, did you ever
get one?), then perhaps this article in the CHE might be of interest. It
probably needs a subscription to read the entire article. I'll paste enough
to give you the idea, though.
Does grade inflation at the privates extend to ratings by students of
professors?
Dave Albrecht
http://chronicle.com/article/RateMyProfessorsAppearancecom/124336/?sid=cr&utm_source=cr&utm_medium=en
September 12, 2010
August 13, 2010 reply from Bob Jensen
I can’t even find myself on RMP let alone brag that
I received a red-hot chili pepper. I’m 100% certain that I was never
any-colored chili pepper. Some of my former colleagues at Trinity do,
however, have red chili peppers beside their names ---
http://www.ratemyprofessors.com/
The reason Bob Jensen would never have been a red
chili pepper is that his students had to learn a lot of tough topics like
hedge accounting on their own! I hate to throw a wet blanket on red chili
peppers. However, I do want to point out the book “Measure Learning Rather
than Satisfaction in Higher Education.” This is not to imply that satisfied
students do not learn as much or more than students who grumble that
“everything I had to learn in this #X%&#Z course I had to learn by myself”
---
http://faculty.trinity.edu/rjensen/265wp.htm
Measure Learning Rather than Satisfaction in Higher
Education, Edited by Ronald E. Flinn and D. Larry Crumbley (American
Accounting Association Teaching, Learning, and Curriculum Section, 2009).
ISBN 0-86539-093-2 The book is free to TLC dues-paying members. Others can
purchase the book from
http://aaahq.org/market.cfm
But I would’ve loved to be more loved by my
students.
More often than not I was cursed by my students.
Bob Jensen
Bob Jensen's threads on grade inflation ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
And
http://faculty.trinity.edu/rjensen/Assess.htm#RateMyProfessor
Professors of the Year
The Council for Advancement and Support of Education
and the Carnegie Foundation for the Advancement of Teaching announced today
winners of their annual
U.S. Professors of the Year award, given to
instructors who show dedication to undergraduate teaching and mentoring.
Elia Powers, Inside Higher Ed, November 15, 2007 ---
http://www.insidehighered.com/news/2007/11/15/topprofs
Jensen Comment
Although "professors of the year" chosen by peers often teach popular
courses, there are possibly more popular courses taught by instructors
who will never win awards given by peers.
It is somewhat revealing (a little about the professor and a lot about the
RateMyProfessor site) to read the student comments on RateMyProfessor. The
"hottest" professors at RateMyProfessor generally have many more evaluations
submitted than the four Professors of the Year" listed below. You can find a
listing of the "hottest" professors (Top 50) at
http://www.ratemyprofessors.com/top50Profs.jsp?from=1&to=25&tab=hottest_top50
- The Rank 1 U.S. Professor of the Year as ranked by peers and judges is
Glen Ellis at Smith College. He only has seven student evaluations at
RateMyProfessor and you can read the outcomes at
http://www.ratemyprofessors.com/ShowRatings.jsp?tid=191487
- The Rank 2 U.S. Professor of the Year as ranked by peers and judges is
Rosemary Karr at Collin County Community College in Texas. She only has 25
student evaluations at RateMyProfessor and you can read the outcomes at
http://www.ratemyprofessors.com/ShowRatings.jsp?tid=207154
I had to chuckle at the student who said:
"I got a 68 in her class
and went to her office for tutorials 3 times a week, still didnt pass me.
she pickes favorites."
- The Rank 3 U.S. Professor of the Year as ranked by peers and judges is
Chris Sorensen at Kansas State University. There are 760 instructors
evaluated from KSU on RateMyProfessor, but apparently not one of Sorensen's
students submitted an evaluation. There were 11 professors with evaluations
from Sorensen's Department of Physics, but Sorensen was not on the list.
- The Rank 4 U.S. Professor of the Year as ranked by peers and judges is
Carlos G. Spaht at Louisiana State University in Shreveport. He only has 16
student evaluations at RateMyProfessor and you can read the outcomes at
http://www.ratemyprofessors.com/ShowRatings.jsp?tid=329076
For Trivia Buffs and Serious Researchers
Thousands of College Instructors Ranked on Just About Everything
November 13, 2007 message from David Albrecht
[albrecht@PROFALBRECHT.COM]
There is a popular teacher in my department. When
this fellow teaches a section of a multi-section course, his section fills
immediately and there is a waiting list. My department does not like an
imbalance in class size, so they monitor enrollment in his section. No one
is permitted to add his section until all other sections have at least one
more student than his.
I'm concerned about student choice, about giving
them a fair chance to get into his section instead of the current random
timing of a spot opening up in his section.
Does anyone else have this situation at your
school? How do you manage student sign-ups for a popular teacher? Any
practical suggestions would be greatly appreciated.
David Albrecht
Bowling Green
November 14, 2007 reply from Bob Jensen
Hi David,
I think the first thing to study is what makes an instructor so popular.
There can be good reasons (tremendous preparation, inspirational, caring,
knowing each student), bad reasons (easy grader, no need to attend class),
and questionable reasons that are not ipso facto good or bad (entertaining,
humorous).
The RateMyProfessor site now has some information on most college
instructors in a number of nations ---
http://www.ratemyprofessors.com/index.jsp The overwhelming factor
leading to popularity is grading, since grading is the number one concern
students reveal in college. Of course there are many problems in this
database, and many instructors and administrators refuse to even look at
these RateMyProfessor archives. Firstly, student reporting is
self-selective. The majority of students in any class do not submit evaluations.
A fringe element (often outliers for and against) tends to provide most of
the information. Since colleges do know the class sizes, it is possible to
get an idea about "sample" size, although these are definitely not random
samples. It's a little like book and product reviews on Amazon.com.
There are both instructors who are not rated at all on RateMyProfessor
and others who are too thinly rated (e.g., fewer than ten evaluations) to
have their evaluations taken seriously. For example, one of my favorite
enthusiastic teachers is the award-winning Amy Dunbar who teaches tax at the
University of Connecticut. Currently there are 82 instructors in the
RateMyProfessor archives who are named Dunbar. But not a single student
evaluation has apparently been sent in by the fortunate students of Amy
Dunbar. Another one of my favorites is Dennis Beresford at the University of
Georgia. But he only has one (highly favorable) evaluation in the archives.
I suspect that there's an added reporting bias. Both Amy and Denny mostly
teach graduate students. I suspect that graduate students are less inclined
to fool with RateMyProfessor.
Having said this, there can be revealing information about teaching
style, grading, exam difficulties, and other things factoring into good and
bad teaching. Probably the most popular thing I've noted is that the
top-rated professors usually get responses about making the class "easy."
Now that can be taken two ways. It is one thing to make difficult material
seem easier while still grading on the basis of mastering the difficult
material. It is quite another to leave out the hard parts so that students
never really master the difficult parts of the course.
If nothing else, RateMyProfessor says a whole lot about the students we
teach. The first thing to note is how these college-level students often
spell worse than high school dropouts. In English classes such bad
grammar may be intentional, but I've read enough term papers over the years
to know that dependence upon word-processor spell checkers has made
students worse at spelling in messages the computer does not check for
them. They're definitely Fonex spellers.
Many students, certainly not all, tend to prefer easy graders. For
example, currently the instructor ranked Number 1 in the United States by
RateMyProfessor appears to be an easy grader, although comments by only a
few individual students should be taken with a grain of salt. Here's Page
One (five out of 92 evaluations) of 19 pages of summary evaluations at
http://www.ratemyprofessors.com/ShowRatings.jsp?tid=23294
Date | Course | Ratings | Comments
11/13/07 | HIST101 | 5 | 5 | 5 | 5 | easiest teacher EVER
11/12/07 | abcdACCT | 1 | 1 | 1 | 1 | good professor
11/11/07 | HistGacct | 3 | 2 | 4 | 1 | Good teacher. Was enjoyable to heat teach. Reccomend class. Made my softmore year.
11/10/07 | HISTACCT | 5 | 5 | 5 | 5 | Very genious.
11/8/07 | histSECT | 3 | 5 | 4 | 4 | amazing. by far the greatest teacher. I had him for Culture and the Holocust with Schiffman and Scott. He is a genius. love him.
Does it really improve ratings when students are not required to give
presentations? Although making a course easy is popular, is it a good thing
to do? Here is Page 3 (five out of 55 evaluations) of the ratings of the
instructor ranked Number 2 in the United States:
Date | Course | Ratings | Comments
12/21/05 | Spanish 102 | 3 | 5 | 5 | 5 | One of the best professors that I have ever had. Homework is taken up on a daily base but, grading is not harsh. No presentations.
11/2/05 | SPA 102 | 4 | 5 | 5 | 3 | Wow, a great teacher. Totally does not call people out and make them feel stupid in class, like a lot of spanish teachers. The homework is super easy quiz grades that can be returned with corrections for extra points. You have to take her for Spa 102!!!! You actually learn in this class but is fun too!
10/27/05 | Span 102 | 4 | 5 | 5 | 5 | I love Senora Hanahan. She is one of the best teachers I ever had. She is very clear and she is super nice. She will go out of her way just to make sure that you understand. I Love Her! I advise everyone to take her if you have a choice. She is great!!
9/14/05 | SPA 201 | 4 | 5 | 5 | 5 | I am absolutly not suprised that Senora Hanahan has smiley faces on every rating. She is awesme and fun.
8/25/05 | SPA 102 | 4 | 5 | 5 | 5 | I LOVE her! Absolutely wonderful! Goes far out of her way to help you and remembers your needs always. She will call you at home if you tell her you need help, and she will do everything possible to keep you on track. I have no IDEA how she does it! She really wants you to learn the language. She's pretty and fun and absolutely wonderful!
Students, however, are somewhat inconsistent about grading and exam
difficulties. For example, read the summary outcomes for the instructor
currently ranked as Number 8 in the United States ---
http://www.ratemyprofessors.com/ShowRatings.jsp?tid=182825
Note this is only one page out of ten pages of comments:
Date | Course | Ratings | Comments
10/31/07 | hpd110 | 5 | 3 | 2 | 4 | she is pushing religion on us too much... she should be more open minded. c-lots is always forcing her faith based lessons down our throats. she makes me wanna puke.
10/14/07 | PysEd100 | 1 | 1 | 1 | 1 | She is no good in my opinion.
5/22/07 | HPD110 | 5 | 5 | 5 | 5 | Dr. Lottes is amazing! it is almost impossible to get lower than an A in her class as long as you show up. her lectures are very interesting and sometimes it's almost like going to therapy. the tests and activities are easy and during the test there are group sections so it'll help your test grades. she is very outgoing and fun! so take her!
12/7/06 | HDP070 | 2 | 5 | 5 | 2 | Grades the class really hard, don't take if you are not already physically fit. Otherwise, she's an amazing teacher. You can tell she really cares about her students.
Read the rest of the comments at
http://www.ratemyprofessors.com/ShowRatings.jsp?tid=182825
It's possible to look up individual colleges and I looked up Bowling
Green State University which is your current home base David. There are
currently 1,322 instructors rated at Bowling Green. I then searched by the
Department of Accounting. There are currently ten instructors rated. The
highest rated professor (in terms of average evaluations) has the following
Page One evaluations:
Date | Course | Ratings | Comments
4/9/07 | mis200 | 4 | 5 | 5 | 1 | i admit, i don't like the class (mis200) since i think it has nothing to do with my major. but mr. rohrs isn't that hard, and makes the class alright.
4/5/07 | mis200 | 3 | 4 | 4 | 1 | Other prof's assign less work for this class, but his assignments aren't difficult. Really nice guy, helpful if you ask, pretty picky though.
4/4/07 | Acct102 | 2 | 5 | 5 | 2 | Easy to understand, midwestern guy. Doesn't talk over your head.
12/14/06 | mis200 | 4 | 5 | 5 | 2 | Kind of a lot of work but if you do good on it you will def do good...real cool guy
12/10/06 | BA150 | 4 | 5 | 5 | 4 | Mr. Rohrs made BA 150 actually somewhat enjoyable. He is very helpful and makes class as interesting as possible. He is also very fair with grading. Highly Recommend.
Your evaluations make me want to take your classes, David. However, only
36 students have submitted evaluations, and my guess is that over the same
years you've taught hundreds of students. Still, we can probably extrapolate
that you make dull old accounting interesting and entertaining to students.
In answer to your question about assigning students to multiple
sections, I have no answers. Many universities cycle the pre-registration
according to accumulated credits earned. Hence seniors sign up first and
first-year students get the leftovers. Standby signups are handled
according to timing, much like airlines dole out standby tickets.
It is probably a bad idea to let instructors themselves add students to
the course. Popular teachers may be deluged with students seeking favors,
and some instructors do not know how to say no even though they may be
hurting other students by admitting too many students. Fortunately, classes
are generally limited by the number of seats available. Distance education
courses do not have that excuse for limiting class size.
PS
For research and sometimes entertainment, it's interesting to read the
instructor feedback comments concerning their own evaluations of
RateMyProfessor ---
http://www.mtvu.com/professors_strike_back/
You can also enter the word "humor" into the top search box and
investigate the broad range of humor and humorous styles of instructors.
Bob Jensen
Also see the following:
Bob Jensen's threads on the dysfunctional aspects of teacher evaluations
on grade inflation ---
http://faculty.trinity.edu/rjensen/Assess.htm#GradeInflation
RAND Corporation: Measuring Teacher Effectiveness ---
http://www.rand.org/education/projects/measuring-teacher-effectiveness.html
Explore the Measuring Teacher Effectiveness Fact Sheet Series
Teachers Matter: Understanding Teachers' Impact on Student Achievement
Research suggests that, among school-related factors, teachers matter
most when it comes to a student's academic performance. Nonschool
factors do influence student achievement, but effective teaching has the
potential to help level the playing field.
Multiple Choices: Options for Measuring Teaching Effectiveness
Teaching is a complex activity that should be measured with multiple
methods. Some examine teachers' practices directly, while others
emphasize student outcomes. Each method has trade-offs, and no single
method provides a complete picture of a teacher's effectiveness.
Tests and the Teacher: What Student Achievement Tests Do—and Don't—Tell
Us About Teacher Effectiveness
In addition to helping students learn reading and math, we also trust
teachers to teach students to think, reason, and work cooperatively with
one another. Students' scores on achievement tests tell us something—but
by no means everything—about how well teachers are meeting these
expectations.
Value-Added Modeling 101: Using Student Test Scores to Help Measure
Teaching Effectiveness
Value-added models, or VAMs, attempt to measure a teacher's impact on
student achievement apart from other factors, such as individual
ability, family environment, past schooling, and the influence of peers.
Value-added estimates enable relative judgments but are not absolute
indicators of effectiveness.
Student Growth Percentiles 101: Using Relative Ranks in Student Test
Scores to Help Measure Teaching Effectiveness
Student growth percentiles, or SGPs, provide a simple way of
comparing the improvement of one teacher's students at the end of the
year with the improvement of other students who started the year at the
same level.
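The SGP idea described above can be reduced to a toy calculation: take the
percentile rank of a student's current score among cohort peers who posted
the same prior-year score. This is only an illustrative sketch with names of
my own choosing; real SGP systems use quantile regression over several prior
years of scores rather than exact prior-score matching.

```python
from bisect import bisect_left

def student_growth_percentile(prior, current, cohort):
    """Percentile rank of `current` among this year's scores of cohort
    students who started at the same prior-year level.
    `cohort` is a list of (prior_score, current_score) pairs."""
    peers = sorted(c for p, c in cohort if p == prior)
    if not peers:
        return None  # no same-start peers to compare against
    # fraction of same-start peers scoring strictly below `current`
    return 100 * bisect_left(peers, current) / len(peers)
```

For example, a student who started at 50 and now scores 80 would outrank
two of three peers who also started at 50, landing near the 67th growth
percentile under this simplified scheme.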
The Ketz Solution to Grade Inflation
"Sue the University!" by J. Edward Ketz, SmartPros, September 2009
---
http://accounting.smartpros.com/x67598.xml
She graduated from the college last April with a
bachelor of business administration degree, majoring in information
technology. Trina finished with a “solid” attendance record and a grade
point average of 2.7. She applied to every potential job placement available
through the college’s placement services, but to no avail. Because she
cannot get a job, she is suing the college for tuition costs ($70,000) plus
compensation for the stress due to her inability to land a job ($2,000).
News agencies that have reported on this event
uniformly point out that the case is meritless because colleges do not
promise a job to their students. Instead, they promise an education. These
reporters and pundits, however, miss the significance of the lawsuit. When
universities offer an education to their students, what are they really
offering and what do they deliver? And how can you tell whether the
university has actually provided an education to the student?
We used to say institutions of higher learning
supplied higher levels of knowledge; but with the knowledge explosion in the
last 100 years or so, nobody today comprehends much of the total human
knowledge that we collectively have. Besides, anybody can log on to the web
and presumably find knowledge. Whether the individual knows what to do with
it is another matter.
And Bill Gates is one example that it is possible
to gain knowledge without a college degree. Of course, one might quickly add
that for every success story such as Gates’, there are hundreds of
uneducated people who are unemployed or working for minimum wages.
For some time universities have been asserting that
an education is a process by which the university teaches students to think.
Academia teaches “critical thinking”, communication skills, global
awareness, and diversity training. Bypassing any thoughts about whether this
is what higher learning should be about, I want to focus on assessment. When
a student graduates, how does he or she (or parents) grasp whether the
mission has been accomplished? Did they receive value commensurate with the
costs?
Our society is quite utilitarian, and that
philosophy began to pervade universities when Congress democratized college
education after World War II with the GI bill. Education at universities was
once for the elite, but now it exists for the masses. By necessity,
universities have had to water down the content of courses because the
average person, by definition, is unable to accomplish what the elite can
do.
The irony, as many have stated, is grade
inflation for the masses, especially when contrasted with grades that
existed a century ago. The interesting point is that universities do not
have the will to change this aspect of the system. They prefer to have
satisfied “customers” and parents and governments—and the tuition dollars.
One simple scheme to improve the grading system
is to require faculty to rank order the students and resolve ties with the
median of the tied scores. Any faculty member who assigns all A’s ranks all
of the students in the 50th percentile. A faculty member who gives 60% A’s
and 40% B’s assigns the first group to the 70th percentile and members of
the latter group to the 20th percentile. But, this improvement will never be
implemented because universities don’t really want to fix this problem.
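Ketz's tie-resolution arithmetic can be sketched in a few lines. The
function name and the percentile-measured-from-the-bottom convention are my
assumptions, not Ketz's, but the sketch reproduces both of his examples:
all A's land everyone at the 50th percentile, and a 60/40 split of A's and
B's lands the groups at the 70th and 20th percentiles.

```python
def tie_median_percentiles(grade_counts):
    """Assign each tied grade group the median percentile of the rank
    interval it occupies. `grade_counts` maps grade -> count, listed
    from best grade to worst (insertion order matters)."""
    total = sum(grade_counts.values())
    result = {}
    upper = total  # students at or above the top of the current group
    for grade, count in grade_counts.items():
        lower = upper - count
        # group spans percentiles 100*lower/total .. 100*upper/total;
        # the median of the tied scores is the midpoint of that span
        result[grade] = 100 * (lower + upper) / (2 * total)
        upper = lower
    return result
```

So `tie_median_percentiles({"A": 60, "B": 40})` places the A group at the
70th percentile and the B group at the 20th, matching Ketz's example.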
The utilitarian worldview raises its head at
various points, and one concerns the value of education. While many analysts
dismiss Thompson’s lawsuit because her college did not promise her a job, it
would prove interesting to take a poll of students and parents across the
land. My hunch is that enough people would side with Trina to make
university administrators uncomfortable.
After all, how can you tell whether somebody has
achieved a sufficiently proficient level of critical thinking? How can you
assess one’s ability to communicate or his or her ability to grasp global
issues or be sensitive to diversity? Of course, we professors claim to have
the professional judgment to answer these questions, but what we do is a
black box to outsiders, if not to ourselves.
In a lot of ways trying to answer these questions
isn’t much different from debating the number of angels that can dance on a
pinhead. I hypothesize that most Americans would escape the subjectivity of
these issues by saying the acid test for these concerns is the ability to
get a job. Perhaps not immediately, as a liberal arts education is often
deemed a useful foundation for a professional education, such as law, but
eventually one needs some sort of employment to say that the education has
succeeded.
Accounting education is no different. On the one
hand, we would like graduates to demonstrate critical thinking, ethical
decision making, and be aware of international business issues. On the other
hand, graduates need skills for the marketplace. And not just skills to
obtain a job, but skills and attitudes and a work ethic to advance and
contribute to the firm and to society.
As I reflect on Trina Thompson’s lawsuit, I wonder
how many more students will sue their alma maters. And, if a judge allows
the suit to proceed, I wonder whether jury members will sympathize with the
colleges or with the unemployed graduates. There is more at stake here than
merely the discontent of one unemployed former student.
Jensen Comment
Below is my August 17, 2009 comment on the Trina Thompson lawsuit. ABC News
asserted that Monroe College, through overzealous recruiting practices, made
"promises" beyond what is normal for more traditional colleges and
universities. If she wins this lawsuit, it need not make most other learning
institutions worry.
A New York City woman who says she can't find a job is
suing the college where she earned a bachelor's degree. Trina Thompson filed a
lawsuit last week against Monroe College in Bronx Supreme Court. The 27-year-old
is seeking the $70,000 she spent on tuition. Thompson says she's been unable to
find gainful employment since she received her information technology degree in
April.
"Jobless NYC woman sues college for $70K in tuition," Yahoo News, August
2, 2009 ---
http://news.yahoo.com/s/ap/20090803/ap_on_fe_st/us_odd_jobless_grad_sues
Jensen Comment
ABC News added some revelations about deceptive promises being made to
student prospects and tuition rip-offs. There may be circumstances that make
this lawsuit different from most situations for college graduates in general.
Question
What topic dominates instructor evaluations on RateMyProfessors.com (or RATE for
short)?
"RateMyProfessors — or His Shoes Are Dirty," by Terry Caesar, Inside
Higher Ed, July 28, 2006 ---
http://www.insidehighered.com/views/2006/07/28/caesar
But the trouble begins here. Like those guests,
students turn out to be candid about the same thing. Rather than sex, it’s
grades. Over and over again, RATE comments cut right to the chase: how easy
does the professor grade? If easy, all things are forgiven, including a dull
classroom presence. If hard, few things are forgiven, especially not a dull
classroom presence. Of course we knew students are obsessed with grades. Yet
until RATE could we have known how utterly, unremittingly, remorselessly?
And now the obsession is free to roam and cavort,
without the constraints of the class-by-class student evaluation forms, with
their desiderata about the course being “organized” or the instructor having
“knowledge of subject matter.” These things still count. RATE students
regularly register them. But nothing counts like grades. Compared to RATE,
the familiar old student evaluation forms suddenly look like searching
inquiries into the very nature of formal education, which consists of many
other things than the evaluative dispositions of the professor teaching it.
What other things? For example, whether or not the
course is required. Even the most rudimentary of student evaluation forms
calls for this information. Not RATE. Much of the reason a student is free
to go straight for the professorial jugular — and notwithstanding all the
praise, the site is a splatfest — is because course content can be merrily
cast aside. The raw, visceral encounter of student with professor, as
mediated through the grade, emerges as virtually the sole item of interest.
Of course one could reply: so what? The site
elicits nothing else. That’s why it’s called, “rate my professors,” and not
“rate my course.” In effect, RATE takes advantage of the slippage always
implicit in traditional student evaluations, which both are and are not
evaluations of the professor rather than the course. To be precise, they are
evaluations of the professor in terms of a particular course. This
particularity, on the other hand, is precisely what is missing at the RATE
site, where whether or not a professor is being judged by majors — a crucial
factor for departmental and college-wide tenure or promotion committees who
are processing an individual’s student evaluations — is not stipulated.
Granted, a student might bring up being a major. A
student might bring anything up. This is why RATE disappoints, though,
because there’s no framework, not even that of a specific course, to
restrain or guide student comments. “Sarcastic” could well be a different
thing in an upper-division than in a lower-division course. But in the
personalistic RATE idiom, it’s always a character flaw. Indeed, the purest
RATE comments are all about character. Just as the course is without
content, the professor is without performative ability. Whether he’s a “nice
guy” or she “plays favorites,” it’s as if the student has met the professor
a few times at a party, rather than as a member of his or her class for a
semester.
RATE comments are particularly striking if we
compare those made by the professor’s colleagues as a result of classroom
observations. Many departments have evolved extremely detailed checksheets.
I have before me one that divides the observation into four categories,
including Personal Characteristics (10 items), Interpersonal Relationships
(8), Subject Application/Knowledge (8), and Conducting Instruction (36). Why
so many in the last category? Because performance matters — which is just
what we tell students about examinations: each aims to test not so much an
individual’s knowledge as a particular performance of that knowledge.
Of course, some items on the checksheet are of
dubious value, e.g. “uses a variety of cognitive levels when asking
questions.” So it goes in the effort to itemize successful teaching, an
attempt lauded by proponents of student evaluations or lamented by critics.
The genius of RATE is to bypass the attempt entirely, most notoriously with
its “Hotness Total.” Successful teaching? You may be able to improve
“helpfulness” or “clarity.” But you can’t very well improve “hotness.”
Whether or not you are a successful teacher is not safely distant at RATE
from whether or not you are “hot.”
Perhaps it never was. In calling for a temperature
check, RATE may merely be directly addressing a question — call it the
charisma of an individual professor — that traditional student evaluations
avoid. If so, though, they avoid it with good reason: charisma can’t be
routinized. When it is, it becomes banal, which is one reason why the
critical comments are far livelier than the celebratory ones. RATE winds up
testifying to one truism about teaching: It’s a lot easier to say what good
teaching isn’t than to say what it is. Why? One reason is, because it’s a
lot easier for students who care only about teachers and not about teaching
to say so.
Finally, what about these RATE students? How many
semester hours have they completed? How many classes did they miss? It is
with good reason (we discover) that traditional student evaluation forms are
careful to ask something about each student. Not only is it important for
the administrative processing of each form. Such questions, even at a
minimal level, concede the significance in any evaluation of the evaluating
subject. Without some attention to this, the person under consideration is
reduced to the status of an object — which is, precisely, what the RATE
professor becomes, time after time. Students on RATE provide no information
at all about themselves, not even initials or geographical locations, as
given by many of the people who rate books and movies on amazon.com or who
give comments on columns and articles on this Web site.
In fact, students at RATE don’t even have to be
students! I know of one professor who was so angered at a comment made by
one of her students that she took out a fake account, wrote a more favorable
comment about herself, and then added more praise to the comments about two
of her colleagues. How many other professors do this? There’s no telling —
just as there’s no telling about local uses of the site by campus
committees. Of course this is ultimately the point about RATE: Even the
student who writes in the most personal comments (e.g. “hates deodorant") is
completely safe from local retribution — never mind accountability — because
the medium is so completely anonymous.
Thus, the blunt energies of RATE emerge as cutting
edge for higher education in the 21st century. In this respect, the degree
of accuracy concerning any one individual comment about any one professor is
beside the point. The point is instead the medium itself and the nature of
the judgements it makes possible. Those on display at RATE are immediate
because the virtual medium makes them possible, and anonymous because the
same medium requires no identity markers for an individual. Moreover, the
sheer aggregation of the site itself — including anybody from anywhere in
the country — emerges as much more decisive than what can or cannot be said
on it. I suppose this is equivalent to shrugging, whatever we think of RATE,
we now have to live with it.
I think again of the very first student evaluation
I received as a T.A. The result? I no longer remember. Probably not quite as
bad as I feared, although certainly not as good as I hoped. The only thing I
remember is one comment. It was made, I was pretty sure, by a student who
sat right in the front row, often put her head down on the desk (the class
was at 8 a.m.) and never said a word all semester. She wrote: “his shoes are
dirty.” This shocked me. What about all the time I had spent, reading,
preparing, correcting? What about how I tried to make available the best
interpretations of the stories required? My attempts to keep discussions
organized, or just to have discussions, rather than lectures?
All irrelevant, at least for one student? It seemed
so. Worse, I had to admit the student was probably right — that old pair of
brown wingtips I loved was visibly becoming frayed and I hadn’t kept them
shined. Of course I could object: Should the state of a professor’s shoes
really constitute a legitimate student concern? Come to this, can’t you be a
successful teacher if your shoes are dirty? In today’s idiom, might this not
even strike at least some students all by itself as being, well, “hot"? In
any case, I’ve never forgotten this comment. Sometimes it represents to me
the only thing I’ve ever learned from reading my student evaluations. I took
it very personally once and I cherish it personally still.
Had it appeared on RATE, however, the comment would
feel very different. A RATE[D] professor is likely to feel like a contestant
on “American Idol,” standing there smiling while the results from the
viewing audience are totaled. What do any of them learn? Nothing, except
that everything from the peculiarities of their personalities to, ah, the
shine of their shoes, counts. But of course as professors we knew this
already. Didn’t we? Of course it might always be good to learn it all over
again. But not at a site where nobody’s particular class has any weight; not
in a medium in which everybody’s words float free; and not from students
whose comments guarantee nothing except their own anonymity. I’ll bet some
of them even wear dirty shoes.
July 28, 2006 reply from Alexander Robin A
[alexande.robi@UWLAX.EDU]
Two quotes from a couple of Bob Jensen's recent
posts:
"Of course we knew students are obsessed with
grades." (from the RateMyProfessors thread)
"The problem is that universities have explicit
or implicit rankings of "journal quality" that is largely dictated by
research faculty in those universities. These rankings are crucial to
promotion, tenure, and performance evaluation decisions." (from the TAR
thread)
These two issues are related. First, students are
obsessed with grades because universities, employers and just about everyone
else involved are obsessed with grades. One can also say that faculty are
obsessed with publications because so are those who decide their fates. In
these two areas of academia, the measurement has become more important than
the thing it was supposed to measure.
For the student, ideally the learning is the most
important outcome of a class and the grade is supposed to reflect how
successful the learning was. But the learning does not directly and tangibly
affect the student - the grade does. In my teaching experience students,
administrators and employers saw the grade as being the key outcome of a
class, not the learning.
Research publication is supposed to result from a
desire to communicate the results of research activity that the researcher
is very interested in. But, especially in business schools, this has been
turned on its head and the publication is most important and the research is
secondary - it's just a means to the publication, which is necessary for
tenure, etc.
It's really a pathetic situation in which the
ideals of learning and discovery are largely perverted. Had I fully
understood the magnitude of the problem, I would have never gone for a PhD
or gotten into teaching. As to what to do about it, I really don't know. The
problems are so deeply entrenched in academic culture. Finally I just gave
up and retired early hoping to do something useful for the rest of my
productive life.
Robin Alexander
Bob Jensen's threads on teaching evaluations are at
http://faculty.trinity.edu/rjensen/assess.htm#TeachingStyle
Bob Jensen's threads on teaching evaluations and learning styles are at
http://faculty.trinity.edu/rjensen/assess.htm#LearningStyles
Professor Socrates' Teaching Evaluations: He's a Drag
"Hemlock Available in the Faculty Lounge," by
Thomas Cushman, The Chronicle of Higher Education, March 16, 2007 ---
http://chronicle.com/temp/reprint.php?id=6fnxs4gx7j6qr4v7qn567y5hb52ywb33
Teaching evaluations have become a permanent fixture in the academic
environment. These instruments, through which students express their
true feelings about classes and professors, can make or break an
instructor. What would students say if they had Socrates as a
professor? This class on
philosophy was really good, Professor Socrates is sooooo smart,
I want to be just like him when I graduate (except not so short). I
was amazed at how he could take just about any argument and prove it
wrong.
I would advise him, though, that he doesn't
know everything, and one time he even said in class that the wise
man is someone who knows that he knows little (Prof. Socrates, how
about that sexist language!?). I don't think he even realizes at
times that he contradicts himself. But I see that he is just eager
to share his vast knowledge with us, so I really think it is more a
sin of enthusiasm than anything else.
I liked most of the meetings, except when
Thrasymachus came. He was completely arrogant, and I really resented
his male rage and his point of view. I guess I kind of liked him,
though, because he stood up to Prof. Socrates, but I think he is
against peace and justice and has no place in the modern university.
Also, the course could use more women
(hint: Prof. Socrates, maybe next time you could have your wife
Xanthippe come in and we can ask questions about your home life!
Does she resent the fact that you spend so much time with your
students?). All in all, though, I highly recommend both the course
and the instructor.
Socrates is a real drag, I don't
know how in hell he ever got tenure. He makes students feel bad by
criticizing them all the time. He pretends like he's teaching them,
but he's really ramming his ideas down student's throtes. He's
always taking over the conversation and hardly lets anyone get a
word in.
He's sooo arrogant. One time in class this
guy comes in with some real good perspectives and Socrates just kept
shooting him down. Anything the guy said Socrates just thought he
was better than him.
He always keeps talking about these figures
in a cave, like they really have anything to do with the real
world. Give me a break! I spend serious money for my education and I
need something I can use in the real world, not some b.s. about
shadows and imaginary trolls who live in caves.
He also talks a lot about things we haven't
read for class and expects us to read all the readings on the
syllabus even if we don't discuss them in class and that really bugs
me. Students' only have so much time and I didn't pay him to torture
me with all that extra crap.
If you want to get anxious and depressed,
take his course. Otherwise, steer clear of him! (Oh yeah, his
grading is really subjective, he doesn't give any formal exams or
papers so its hard to know where you stand in the class and when you
try to talk to him about grades he just gets all agitated and
changes the topic.)
For someone who is always
challenging conventional wisdom (if I heard that term one more time
I was going to die), Professor Socrates' ideal republic is pretty
darn static. I mean there is absolutely no room to move there in
terms of intellectual development and social change.
Also, I was taking this course on queer
theory and one of the central concepts was "phallocentricism" and I
was actually glad to have taken Socrates because he is a living,
breathing phallocentrist!
Also, I believe this Republic that Prof.
Socrates wants to design — as if anyone really wants to let this
dreadful little man design an entire city — is nothing but a plan
for a hegemonic, masculinist empire that will dominate all of Greece
and enforce its own values and beliefs on the diverse communities of
our multicultural society.
I was warned about this man by my adviser
in women's studies. I don't see that anything other than white male
patriarchy can explain his omnipresence in the agora and it
certainly is evident that he contributes nothing to a multicultural
learning environment. In fact, his whole search for the Truth is
evidence of his denial of the virtual infinitude of epistemic
realities (that term wasn't from queer theory, but from French lit,
but it was amazing to see how applicable it was to queer theory).
One thing in his defense is that he was
much more positive toward gay and lesbian people. Actually, there
was this one guy in class, Phaedroh or something like that, who
Socrates was always looking at and one day they both didn't come to
class and they disappeared for the whole day. I'm quite sure that
something is going on there and that the professor is abusing his
power over this student.
I learned a lot in this class, a lot
of things I never knew before. From what I heard from other
students, Professor Socrates is kind of weird, and at first I agreed
with them, but then I figured out what he was up to. He showed us
that the answers to some really important questions already are in
our minds.
I really like how he says that he is not so
much a teacher, but a facilitator. That works for me because I
really dislike the way most professors just read their lectures and
have us write them all down and just regurgitate them back on tests
and papers. We need more professors like Professor Socrates who are
willing to challenge students by presenting materials in new and
exciting ways.
I actually came out of this class with more
questions than answers, which bothered me and made me uncomfortable
in the beginning, but Professor Socrates made me realize that that's
what learning is all about. I think it is the only class I ever took
which made me feel like a different person afterward. I would highly
recommend this class to students who want to try a different way of
learning.
I don't know why all the people are
so pissed at Professor Socrates! They say he's corrupting us, but
it's really them that are corrupt. I know some people resent his
aggressive style, but that's part of the dialectic. Kudos to you,
Professor Socrates, you've really changed my way of thinking! Socs
rocks!!
My first thought about this class
was: this guy is really ugly. Then I thought, well, he's just
a little hard on the eyes. Finally, I came to see that he was kind
of cute. Before I used to judge everyone based on first impressions,
but I learned that their outward appearances can be seen in
different ways through different lenses.
I learned a lot in this class, especially
about justice. I always thought that justice was just punishing
people for doing things against the law and stuff. I was really
blown away by the idea that justice means doing people no harm (and
thanks to Prof. Socrates, I now know that the people you think are
your enemies might be your friends and vice versa, I applied that to
the people in my dorm and he was absolutely right).
An excellent class over all. One thing I
could suggest is that he take a little more care about his personal
appearance, because as we all know, first impressions are lasting
impressions.
Socrates is bias and prejudice and a
racist and a sexist and a homophobe. He stole his ideas from the
African people and won't even talk to them now. Someone said that
maybe he was part African, but there is noooooo way.
Thomas Cushman is a professor of sociology at Wellesley
College.
Grade inflation begins before students attend college
How can colleges tell which of the millions of A grades on applications are
really A+ grades earned by the very top students?
In the cat-and-mouse maneuvering over admission to
prestigious colleges and universities, thousands of high schools have simply
stopped providing that information, concluding it could harm the chances of
their very good, but not best, students. Canny college officials, in turn, have
found a tactical way to respond. Using broad data that high schools often
provide, like a distribution of grade averages for an entire senior class, they
essentially recreate an applicant's class rank. The process has left them
exasperated. "If we're looking at your son or daughter and you want us to know
that they are among the best in their school, without a rank we don't
necessarily know that," said Jim Bock, dean of admissions and financial aid at
Swarthmore College.
Alan Finder, "Schools Avoid Class Ranking, Vexing Colleges," The New York
Times, March 5, 2006 ---
http://www.nytimes.com/2006/03/05/education/05rank.html
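The rank-recreation process the article describes can be sketched in a few lines of Python. This is only an illustration of the idea, not any college's actual method; the senior-class GPA data below are invented.

```python
# Sketch: approximating an applicant's class rank from the grade-average
# distribution a high school reports, as admissions offices are said to do.
from bisect import bisect_left

def estimated_percentile(applicant_gpa, class_gpas):
    """Percent of the senior class with a GPA below the applicant's."""
    ranked = sorted(class_gpas)
    below = bisect_left(ranked, applicant_gpa)  # count of strictly lower GPAs
    return 100.0 * below / len(ranked)

# An invented senior class of ten:
class_gpas = [2.8, 3.0, 3.1, 3.3, 3.5, 3.6, 3.7, 3.8, 3.9, 4.0]
print(round(estimated_percentile(3.65, class_gpas)))  # 60
```

With finer-grained distribution data (deciles or grade bands), the same lookup gives admissions staff roughly the rank the high school declined to report.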
Why grades are worse predictors of academic success than standardized
tests
Several weeks into his first year of teaching math
at the High School of Arts and Technology in Manhattan, Austin Lampros received
a copy of the school’s grading policy. He took particular note of the
stipulation that a student who attended class even once during a semester, who
did absolutely nothing else, was to be given 45 points on the 100-point scale,
just 20 short of a passing mark.
Samuel G. Freedman, "A Teacher Grows Disillusioned After a ‘Fail’ Becomes a
‘Pass’," The New York Times, August 1, 2007 ---
http://www.nytimes.com/2007/08/01/education/01education.html
That student, Indira Fernandez, had missed dozens
of class sessions and failed to turn in numerous homework assignments,
according to Mr. Lampros’s meticulous records, which he provided to The New
York Times. She had not even shown up to take the final exam. She did,
however, attend the senior prom.
Through the intercession of Ms. Geiger, Miss
Fernandez was permitted to retake the final after receiving two days of
personal tutoring from another math teacher. Even though her score of 66
still left her with a failing grade for the course as a whole by Mr.
Lampros’s calculations, Ms. Geiger gave the student a passing mark, which
allowed her to graduate.
Continued in article
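The arithmetic of the grading policy above is worth making explicit: a 45-point attendance floor against a 65-point passing mark means showing up once earns a student more than two-thirds of a passing grade. A minimal sketch, with the floor and passing mark taken from the article and everything else invented:

```python
# Sketch of the attendance-floor grading policy described above:
# any student who attends class even once gets at least 45 of 100 points.
PASSING = 65
ATTENDANCE_FLOOR = 45

def course_grade(earned, attended_once):
    """Raise the earned grade to the floor if the student ever attended."""
    if attended_once:
        return max(earned, ATTENDANCE_FLOOR)
    return earned

# A student who did nothing but show up once:
print(course_grade(0, True))             # 45
print(course_grade(0, True) >= PASSING)  # False: still 20 points short
```

The floor compresses the bottom of the scale, so a little late-semester intervention (like the two days of tutoring in the article) can be enough to convert a near-total failure into a pass.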
Grades are even worse than tests as
predictors of success
"The Wrong Traditions in Admissions," by William E. Sedlacek, Inside
Higher Ed, July 27, 2007 ---
http://www.insidehighered.com/views/2007/07/27/sedlacek
Grades and test scores have worked well as the
prime criteria to evaluate applicants for admission, haven’t they? No!
You’ve probably heard people say that over and over again, and figured that
if the admissions experts believe it, you shouldn’t question them. But that
long held conventional wisdom just isn’t true. Whatever value tests and
grades have had in the past has been severely diminished. There are many
reasons for this conclusion, including greater diversity among applicants by
race, gender, sexual orientation and other dimensions that interact with
career interests. Predicting success with so much variety among applicants
with grades and test scores asks too much of those previous stalwarts of
selection. They were never intended to carry such a heavy expectation and
they just can’t do the job anymore, even if they once did. Another reason is
purely statistical. We have had about 100 years to figure out how to measure
verbal and quantitative skills better but we just can’t do it.
Grades
are even worse than tests as predictors of success.
The major reason is
grade inflation. Everyone
is getting higher grades these days, including those in high
school, college, graduate, and professional school. Students
are bunching up at the top of the grade distribution and we
can’t distinguish among them in selecting who would make the
best student at the next level.
We need a fresh approach. It is not good enough to feel
constrained by the limitations of our current ways of
conceiving of tests and grades. Instead of asking; “How can
we make the SAT and other such tests better?” or “How can we
adjust grades to make them better predictors of success?” we
need to ask; “What kinds of measures will meet our needs now
and in the future?” We do not need to ignore our current
tests and grades, we need to add some new measures that
expand the potential we can derive from assessment.
We appear to
have forgotten why tests were created in the first place.
While they were always considered to be useful in evaluating
candidates, they were also considered to be more equitable
than using prior grades because of the variation in quality
among high schools.
Test results
should be useful to educators — whether involved in
academics or student services — by providing the basis to
help students learn better and to analyze their needs. As
currently designed, tests do not accomplish these
objectives. How many of you have ever heard a colleague say
“I can better educate my students because I know their SAT
scores”? We need some things from our tests that currently
we are not getting. We need tests that are fair to all and
provide a good assessment of the developmental and learning
needs of students, while being useful in selecting
outstanding applicants. Our current tests don’t do that.
The rallying
cry of “all for one and one for all” is one that is used
often in developing what are thought of as fair and
equitable measures. Commonly, the interpretation of how to
handle diversity is to hone and fine-tune tests so they
work equally well for everyone (or at least to try to do
that). However, if different groups have different
experiences and varied ways of presenting their attributes
and abilities, it is unlikely that one could develop a
single measure, scale, test item etc. that could yield
equally valid scores for all. If we concentrate on results
rather than intentions, we could conclude that it is
important to do an equally good job of selection for each
group, not that we need to use the same measures for all to
accomplish that goal. Equality of results, not process is
most important.
Therefore,
we should seek to retain the variance due to culture, race,
gender, and other aspects of non-traditionality that may
exist across diverse groups in our measures, rather than
attempt to eliminate it. I define non-traditional persons as
those with cultural experiences different from those of
white middle-class males of European descent; those with
less power to control their lives; and those who experience
discrimination in the United States.
While
the term “noncognitive” appears to be precise and
“scientific” sounding, it has been used to describe a wide
variety of attributes. Mostly it has been defined as
something other than grades and test scores, including
activities, school honors, personal statements, student
involvement etc. In many cases those espousing noncognitive
variables have confused a method (e.g. letters of
recommendation) with what variable is being measured. One
can look for many different things in a letter.
Robert Sternberg’s system of
viewing intelligence provides a model, but it is important to
know what sorts of abilities are being assessed and that
those attributes are not just proxies for verbal and
quantitative test scores. Noncognitive variables appear to
be in Sternberg’s experiential and contextual domains, while
standardized tests tend to reflect the componential domain.
Noncognitive variables are useful for all students, they are
particularly critical for non-traditional students, since
standardized tests and prior grades may provide only a
limited view of their potential.
I and
my colleagues and students have developed a system of
noncognitive variables that has worked well in many
situations. The eight variables in the system are
self-concept, realistic self-appraisal, handling the system
(racism), long range goals, strong support person,
community, leadership, and nontraditional knowledge.
Measures of these dimensions are available at no cost in a
variety of articles and in a book,
Beyond the Big Test.
This
Web site has previously featured how
Oregon State University has used a
version of this system very successfully in increasing their
diversity and student success. Aside from increased
retention of students, better referrals for student services
have been experienced at Oregon State. The system has also
been employed in selecting Gates Millennium Scholars. This
program, funded by the Bill & Melinda Gates Foundation,
provides full scholarships to undergraduate and graduate
students of color from low-income families. The SAT scores
of those not selected for scholarships were somewhat higher
than those selected. To date this program has provided
scholarships to more than 10,000 students attending more
than 1,300 different colleges and universities. Their
college GPAs are about 3.25, with five year retention rates
of 87.5 percent and five year graduation rates of 77.5
percent, while attending some of the most selective colleges
in the country. About two thirds are majoring in science and
engineering.
The
Washington State Achievers program
has also employed the noncognitive variable system discussed
above in identifying students from certain high schools that
have received assistance from an intensive school reform
program also funded by the Bill & Melinda Gates Foundation.
More than 40 percent of the students in this program are
white, and overall the students in the program are enrolling
in colleges and universities in the state and are doing
well. The program provides high school and college mentors
for students. The
College Success Foundation is
introducing a similar program in Washington, D.C., using the
noncognitive variables my colleagues and I have developed.
Recent
articles in this publication have discussed programs at the
Educational Testing Service for
graduate students and
Tufts University for
undergraduates that have incorporated noncognitive
variables. While I applaud the efforts for reasons I have
discussed here, there are questions I would ask of each
program. What variables are you assessing in the program? Do
the variables reflect diversity conceptually? What evidence
do you have that the variables assessed correlate with
student success? Are the evaluators of the applications
trained to understand how individuals from varied
backgrounds may present their attributes differently? Have
the programs used the research available on noncognitive
variables in developing their systems? How well are the
individuals selected doing in school compared to those
rejected or those selected using another system? What are
the costs to the applicants? If there are increased costs to
applicants, why are they not covered by ETS or Tufts?
Until these
and related questions are answered these two programs seem
like interesting ideas worth watching. In the meantime we
can learn from the programs described above that have been
successful in employing noncognitive variables. It is
important for educators to resist half measures and to
confront fully the many flaws of the traditional ways higher
education has evaluated applicants.
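The "bunching at the top" problem Sedlacek describes above is easy to see numerically: as grades inflate, a larger share of applicants lands in the top band and the GPA stops discriminating among them. A small sketch with invented cohorts:

```python
# Sketch of grade-distribution bunching: the fraction of applicants at or
# above a top-band cutoff. All cohort data are invented for illustration.
def share_in_top_band(gpas, cutoff=3.7):
    """Fraction of applicants whose GPA falls at or above the cutoff."""
    return sum(g >= cutoff for g in gpas) / len(gpas)

# Invented cohorts, before and after inflation:
cohort_early = [2.4, 2.7, 2.9, 3.0, 3.2, 3.3, 3.5, 3.7, 3.8, 4.0]
cohort_late  = [3.2, 3.5, 3.7, 3.7, 3.8, 3.8, 3.9, 3.9, 4.0, 4.0]
print(share_in_top_band(cohort_early))  # 0.3
print(share_in_top_band(cohort_late))   # 0.8
```

When 80 percent of a pool sits in the top band, a selector comparing GPAs alone has almost no information left, which is the statistical motivation for the noncognitive measures Sedlacek proposes.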
CUNY to Raise SAT Requirements for Admission
The City University of New York is beginning a drive to
raise admissions requirements at its senior colleges, its first broad revision
since its trustees voted to bar students needing remedial instruction from its
bachelor’s degree programs nine years ago. In 2008, freshmen will have to show
math SAT scores 20 to 30 points higher than they do now to enter the
university’s top-tier colleges — Baruch, Brooklyn, City, Hunter and Queens — and
its six other senior colleges.
Karen W. Arenson, "CUNY Plans to Raise Its Admissions Standards," The New
York Times, July 28, 2007 ---
http://www.nytimes.com/2007/07/28/education/28cuny.html
Note the Stress on Grades (Point 4 Below)
"Playbook: Does Your School Make The Grade? Here are four things to consider
when applying to an undergrad business program" by Louis
Lavelle, with Geoff Gloeckler and Jane Porter, Business Week, March 19,
2007 ---
Click Here
COMPETITION IS FIERCE
1. Once considered a haven for less academically gifted
students, undergraduate business programs are raising their
standards. With more students beating a path to their doors, many
B-schools are boosting their admissions criteria and getting
fussier.
At schools with four-year programs, SAT and ACT requirements have
gone up. The average SAT score for freshmen admitted to the Indiana
University business program, where applications nearly doubled last
year, is now 1340—up from 1312 in 2005-2006 and a full 343 points
higher than the national average for test takers who intend to major
in business. At universities with two-year business programs,
especially those like the University of Iowa where more than 2,000
declared business majors are waiting to join a program designed for
1,300, GPA requirements in pre-business courses are rising, too.
For students, the higher bar requires a strategic rethink. Many
already take standardized tests multiple times to maximize scores.
Those with lower scores who are applying directly to four-year
business programs are beefing up their applications in other ways,
including taking part in extracurricular activities and
fund-raisers. Savvy applicants assess the likelihood of being
accepted at their first-choice schools and give more thought to less
selective "safety" schools.
Those applying to a four-year school with a two-year business
program are advised to contemplate what they'll do if they can't
find places as juniors. Can credits accumulated in the first two
years be transferred to another school? Can one stay put, declare
another major, and obtain a minor in business instead?
IT'S A NATIONAL GAME
2. Undergraduate business education used to be a local or
regional affair. That's changing. Today, many students attend
programs far from home.
Out-of-state schools may provide a broader array of programs than
those available in an applicant's home state. They include
leadership, entrepreneurship, and global business. A number of
schools have launched specialized programs that place students in
hard-to-crack industries that are located in the school's
backyard—such as sports marketing at the University of Oregon, home
state of Nike (NKE) and Adidas, among others; energy commerce at
Texas Tech University; life sciences at Wharton; and both cinematic
arts and computer engineering at the University of Southern
California.
If the academic offerings aren't enough to get the intellectual
juices flowing, consider this: Out-of-state tuition at top public
universities can be a bargain. Attending a top private B-school like
Wharton can easily cost more than $30,000 a year, excluding room and
board and other living expenses. A highly ranked public school like
the No. 2 University of Virginia costs $25,945; No. 13 University of
Texas at Austin is $22,580; and No. 15 University of North Carolina,
$18,010.
Many of the public schools have programs that are roughly on par
with private institutions—in terms of class size, faculty-student
ratios, and other measures. Public schools can also be easier to get
into. The average SAT score at Wharton is 1430—compared with 1366
for Virginia, 1335 at UNC, and 1275 for Texas-Austin.
Sometimes out-of-state schools, public or private, are better at
finding grads decent jobs. If a school has established recruiting
relationships with specific industries, it may be worth a look—no
matter where it is. Are you an aspiring accountant? All of the Big
Four firms recruit at Texas-Austin. Aiming for Wall Street?
Recruiters for eight financial-services giants are among the 10 top
recruiters at New York University. For a would-be "master of the
universe" living in Oklahoma who is considering the University of
Oklahoma—where no big investment banks recruit—the message is clear:
change career goals, or start packing.
INTERNSHIPS MATTER
3. Internships are a valuable learning experience. Since
many employers use them as extended tryouts for full-time positions,
they are also an important pipeline to the most coveted jobs. So
scoring one ought to be near the top of every undergrad's agenda.
Yet not all programs provide the same access to internships. At No.
5 University of Michigan, 92% of undergrads who completed our survey
had internships, compared with less than 25% at No. 81 University of
Texas at Dallas. And not all internships are created equal. Co-op
programs at the University of Cincinnati, Northeastern University,
and Penn State allow students to graduate with up to two years of
work experience. Elsewhere, a three-month summer internship is the
norm.
Why the disparity? For one thing, location matters. To a casual
observer there wouldn't appear to be much to differentiate the
undergraduate B-school program at Fordham University from that of
the University of Denver. Both are private, four-year programs.
Tuition and enrollment are almost identical. And in last year's
ranking they came in at No. 48 and No. 49, respectively. But at
Denver, 57 companies recruited undergrads for internships. At New
York-based Fordham: 200. Emily Sheu transferred from No. 4 Emory
University to No. 34 (this year) Fordham, where she had internships
at Bloomberg and Merrill Lynch & Co. (MER). For her, it was all
about location. "Atlanta," she points out, "is no Manhattan."
Students at three- and four-year programs are more likely to take
in-depth business courses early, making them more competitive
internship candidates. That's one reason why the University of
Michigan is phasing out its two-year program in favor of a
three-year model. Also, watch out for summer school. When schools
schedule classes in the summer before the junior year, having more
than one internship before graduation becomes near-impossible.
BEWARE THE GRADING CURVE
4. Are grades really such a big deal? The answer is a
resounding "yes," especially for those considering schools like
Michigan, Babson College, Oregon, or Pennsylvania, where grading
curves are a fact of business school life. Curves designed to
counter grade inflation by limiting the number of As in any given
class can make it difficult for even high performers to land
interviews with some recruiters.
USC's Marshall School of Business grades students on a curve, with
professors expected to hold the average GPA to 3.0 in core courses
and 3.3 in electives. Most students will get a 3.0, or a B, in each
of their 10 core business courses. A handful will earn a slightly
higher grade, and the same number will earn a lower grade.
For recruiters trolling B-school campuses, a GPA of under 3.5 will
in many cases consign a résumé to the bottom of the stack. At
Marshall, most large employers take the grade structure into
consideration, so students are rarely passed over for interviews.
But for smaller companies not familiar with the school, students are
at a disadvantage. David Freeman, a recent Marshall grad, estimates
that he missed out on a dozen interviews because he didn't meet the
grade requirements companies were looking for. "Without the curve,
my GPA would have been high enough to qualify for these interviews,"
he says.
While a grading curve probably isn't a deal-breaker for students
choosing among a handful of schools, it's certainly something that
should be taken into consideration. It's worth asking, for example,
if the policy is school-wide or if individual professors make their
own rules, and whether the curve covers core courses, electives, or
both.
Some students say that curves cause morale problems,
intensifying competition and making it harder to form meaningful
teams. Before enrolling in a program, prospective students should
find out what, if anything, the school is doing to counter those
problems.
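The forced-curve mechanics described in Point 4 above can be sketched numerically. This is a hypothetical illustration only: the 10%-tail rule, the letter cutoffs, and the function names are assumptions for the sketch, not USC Marshall's actual policy, which the article describes only in outline.

```python
# Hypothetical sketch of a forced grading curve: hold the class mean
# GPA near a target (3.0 for core courses) by giving most students B's
# and promoting/demoting equal numbers of students at the tails.
# The 10% tail fraction is an illustrative assumption.

GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0}

def curve_grades(scores, tail_fraction=0.10):
    """Rank students by raw score; the top tail gets an A, the bottom
    tail a C, and everyone else a B. Equal-sized tails keep the mean
    GPA pinned at exactly 3.0 regardless of raw performance."""
    n = len(scores)
    ranked = sorted(range(n), key=lambda i: scores[i], reverse=True)
    tail = int(n * tail_fraction)
    grades = ["B"] * n
    if tail:  # guard: very small classes get all B's
        for i in ranked[:tail]:   # highest raw scores
            grades[i] = "A"
        for i in ranked[-tail:]:  # lowest raw scores
            grades[i] = "C"
    mean_gpa = sum(GRADE_POINTS[g] for g in grades) / n
    return grades, mean_gpa
```

In a 20-student section with 10% tails, two students earn A's and two earn C's no matter how strong the class is as a whole, which is exactly the complaint voiced by the Marshall graduate quoted above.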
Bob Jensen's threads on higher education controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
Grade inflation carries on after they get to college
Question
What was the average grade at Harvard in 1940?
Answer
In 1940, Harvard students had an unbelievable number of grades below a C
grade.
http://www.thecrimson.com/fmarchives/fm_03_01_2001/article4A.html
In 1940, more Harvard students had an average grade
of C- than any other GPA.
By 1986, that C- had ballooned to a B+. Today more
students receive As and Bs than ever before. And that’s about as far as
the consensus on grade inflation goes. Harry R. Lewis ‘68, dean of the
College, doesn’t even use the word without distancing himself from its
connotations. “I think that by far the dominant cause of grade ‘inflation’
at Harvard,” Lewis writes in an e-mail message, “is the application of
constant grading standards to the work of ever more talented students.”
Continued in article
The average grade in
leading private universities in 1992 was 3.11.
In 2002 it jumped to 3.26 on a four point scale.
Average undergraduate GPA for Alabama, California-Irvine, Carleton, Duke,
Florida, Georgia Tech, Hampden-Sydney, Harvard, Harvey Mudd, Nebraska-Kearney,
North Carolina-Chapel Hill, North Carolina-Greensboro, Northern Michigan,
Pomona, Princeton, Purdue, Texas, University of Washington, Utah, Wheaton
(Illinois), Winthrop, and Wisconsin-La Crosse. Note that inclusion in the
average does not imply that an institution has significant inflation. Data on
GPAs for each institution can be found at the bottom of this web page.
Institutions comprising this average were chosen strictly because they have
either published their data or have sent their data to the author on GPA
trends over the last 11 years.
GradeInflation.com --- http://gradeinflation.com/
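The decade-long drift quoted above (an average grade of 3.11 in 1992 rising to 3.26 in 2002) is easy to put in perspective with back-of-the-envelope arithmetic; the figures below come straight from the quoted passage, and the per-year rate is simply the quotient.

```python
# Back-of-the-envelope arithmetic on the figures quoted above: the
# average grade at leading private universities rose from 3.11 (1992)
# to 3.26 (2002) on a four-point scale.
start_gpa, end_gpa = 3.11, 3.26
years = 2002 - 1992

rise = end_gpa - start_gpa   # total drift over the decade: 0.15 points
per_year = rise / years      # about 0.015 points per year

print(f"Rise: {rise:.2f} points over {years} years "
      f"({per_year:.3f} points/year)")
```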
Grade
inflation is emerging as the new leading scandal of higher education.
"The great grade-inflation lie: Critics say that cushy grading is
producing ignorant college students and a bankrupt education system," by
Tom Scocca, The Boston Phoenix April 23 - 30, 1998 ---
http://www.bostonphoenix.com/archive/features/98/04/23/GRADE_INFLATION.html
October 18, 2005 message from Tracey Sutherland
[tracey@AAAHQ.ORG]
Re new faculty, teaching assistants, and teaching
support -- there is an interesting body of literature developing sparked by
a project begun in the early 1990's that's known as the Preparing Future
Faculty initiative -- funded by the Pew Trusts, NSF, and others in
conjunction with the Council of Graduate Schools and AAC&U -- more at
http://www.preparing-faculty.org/ .
Our thread also seems to be spinning around the
relationships between effort/grades and student course evaluations -- those
with that interest may find a "Pop Quiz" on assumptions about student course
ratings interesting:
http://www.ntlf.com/html/pi/9712/rwatch_1.htm .
Lower on the page at that link is also a brief summary
from Braskamp and Ory's "Assessing Faculty Work" a well-regarded book
including meta-analysis of research on student ratings that includes:
"Factors that are significantly and positively
associated with student ratings include the following: measures of
student achievement; alumni, peer and administrative ratings;
qualitative student comments; workload/difficulty level [ More difficult
courses, with a greater workload, receive slightly higher student
evaluations than do easier/lower workload courses]; energy and
enthusiasm of the teacher; status as a regular faculty member (as
opposed to a graduate assistant); faculty research productivity; student
motivation; student expected grade; and course level. The size and
practical significance of these relationships vary. For example, most
agree that there is little practical significance to the small positive
correlation between expected grade and student ratings, and between
faculty research productivity and student ratings. Similarly, research
shows a small and negative, but practically insignificant, relationship
between class size and student ratings."
"Factors generally found to be unrelated to
student ratings include faculty age and teaching experience,
instructor's gender, most faculty personality traits, student's age,
class level of student, student's GPA, student's personality, and
student's gender (with the exception of a slight preference for same-sex
instructors)."
Just grist for the mill!
Tracey
October 18, 2005 reply from Bob Jensen
One of
the problems with studying correlations between teaching evaluations and
grades is that the data are corrupted by grade inflation prior to the
collection of the data. Research studies have shown that, when teaching
evaluations started to become disclosed for performance and tenure
evaluation decisions, grade inflation commenced. Some of these studies,
such as those at Duke, Rutgers, and Montana are summarized below.
Thus
it may be difficult to conclude that grading and teaching evaluations are
not really correlated if the grade inflation took place before the data were
collected.
Over
the years I’ve seen teaching evaluations of many faculty. One thing that I
noticed about grading and teaching evaluations is that students will hammer
on instructors who they think have an “unfair” grading policy. Unfairness
can be defined in terms of teachers having “pets” and/or “ambiguity” over
what it takes to get an A grade.
Ambiguity is a real problem! Many faculty think that ambiguity is important
when educating students about real world complexities. If course content
(e.g., cases and essay assignments) is ambiguous, it becomes more difficult
to avoid ambiguity in the grading process. I think some of the most serious
grade inflation took place in courses where instructors wanted to leave
ambiguity in course content and not get hammered on teaching evaluations due
to student frustrations over grades. This is especially common in graduate
schools where virtually all grades are either A or B grades and a C is
tantamount to an F.
Bob Jensen
Grade Inflation from High School to Graduate School
The Boston Globe reports seeing 30 to 40 valedictorians per class
Extra credit for AP courses, parental lobbying and genuine hard work by the
most competitive students have combined to shatter any semblance of a Bell curve
An increasing number of Canada's business schools are literally selling MBAs to
generate revenue
[some] professors who say their colleagues are so afraid of bad student
evaluations that they are placating students with A's and B's.
From Jim Mahar's blog on November 24, 2006 ---
http://financeprofessorblog.blogspot.com/
Grade inflation from HS to Grad school
Three related stories that are not strictly
speaking finance but that should be of interest to most in academia.
In the first article, which is from the
Ottawa Citizen,
accelerated and executive MBA
programs come under attack for their supposed detrimental impact on
learning in favor of revenue.
MBAs dumbed down for profit:
"An increasing number of Canada's business
schools are literally selling MBAs to generate revenue for their
ravenous budgets, according to veteran Concordia University finance
professor Alan Hochstein.
That apparent trend to make master of business administration
degrees easier to achieve at a premium cost is leading to
'sub-standard education for enormous fees,' the self-proclaimed
whistleblower said yesterday"
The second article is a widely reported AP article
that centers on high school grade inflation. This high school issue
not only makes the admissions process more difficult but it also
influences the behavior of the students ("complaining works") and
their grade expectations ("I have always gotten A's and therefore I
deserve one here").
A few look-ins from
Boston Globe's version:
"Extra credit for AP courses, parental
lobbying and genuine hard work by the most competitive students have
combined to shatter any semblance of a Bell curve, one in which 'A's
are reserved only for the very best. For example, of the 47,317
applications the University of California, Los Angeles, received for
this fall's freshman class, nearly 21,000 had GPAs of 4.0 or above."
or consider this:
""We're seeing 30, 40 valedictorians at a high
school because they don't want to create these distinctions between
students...."
and
"The average high school GPA increased from
2.68 to 2.94 between 1990 and 2000, according to a federal study."
This is not just a High School problem. In part
because of an agency cost problem (professors have incentives to grade
leniently even if it is to the detriment of students), the same issues
are regular discussions topics at all colleges as well. For instance
consider this story from the
Denver Post.
"A proposal to disclose class rank on student
transcripts has ignited a debate among University of Colorado
professors with starkly different views on whether grade inflation
is a problem....
[some] professors who say their colleagues are
so afraid of bad student evaluations that they are placating
students with A's and B's.
The few professors who grade honestly end
up with dismal scores on student evaluations, which affect their
salaries, professor Paul Levitt said. There is also the "endless
parade of malcontents" in their offices."
I would love to wrap this up with my own
solution, but obviously it is a tough problem to which there are no easy
solutions. That said, maybe it is time that I personally look back at my
past years' class grades to make sure I am not getting too soft. If we
all did that, we'd at least make a dent in the problem.
"Admissions boards face 'grade inflation'," by Justin Pope, Boston
Globe, November 18, 2006 ---
Click Here
That means he will have to find other ways to stand
out.
"It's extremely difficult," he said. "I spent all
summer writing my essay. We even hired a private tutor to make sure that
essay was the best it can be. But even with that, it's like I'm just kind of
leveling the playing field." Last year, he even considered transferring out
of his highly competitive public school, to some place where his grades
would look better.
Some call the phenomenon that Zalasky's fighting
"grade inflation" -- implying the boost is undeserved. Others say students
are truly earning their better marks. Regardless, it's a trend that's been
building for years and may only be accelerating: Many students are getting
very good grades. So many, in fact, it is getting harder and harder for
colleges to use grades as a measuring stick for applicants.
Extra credit for AP courses, parental lobbying and
genuine hard work by the most competitive students have combined to shatter
any semblance of a Bell curve, one in which 'A's are reserved only for the
very best. For example, of the 47,317 applications the University of
California, Los Angeles, received for this fall's freshman class, nearly
21,000 had GPAs of 4.0 or above.
That's also making it harder for the most selective
colleges -- who often call grades the single most important factor in
admissions -- to join in a growing movement to lessen the influence of
standardized tests.
"We're seeing 30, 40 valedictorians at a high
school because they don't want to create these distinctions between
students," said Jess Lord, dean of admission and financial aid at Haverford
College in Pennsylvania. "If we don't have enough information, there's a
chance we'll become more heavily reliant on test scores, and that's a real
negative to me."
Standardized tests have endured a heap of bad
publicity lately, with the SAT raising anger about its expanded length and
recent scoring problems. A number of schools have stopped requiring tests
scores, to much fanfare.
Continued in article
"Regents evaluate grade inflation: Class Ranking Debated," by
Jennifer Brown, Denver Post, November 2, 2006 ---
http://www.denverpost.com/headlines/ci_4588002
A proposal to disclose class rank on student
transcripts has ignited a debate among University of Colorado professors
with starkly different views on whether grade inflation is a problem.
On one side are faculty who attribute the climbing
grade-point averages at CU to the improved qualifications of entering
students in the past dozen years.
And on the other are professors who say their
colleagues are so afraid of bad student evaluations that they are placating
students with A's and B's.
One Boulder English professor said departments
should eliminate raises for faculty if the GPAs within the department rise
above a designated level.
The few professors who grade honestly end up with
dismal scores on student evaluations, which affect their salaries, professor
Paul Levitt said. There is also the "endless parade of malcontents" in their
offices.
"You have to be a masochist to proceed in that
way," said Levitt, one of 10 professors and business leaders who spoke to CU
regents about grade inflation Wednesday.
CU president Hank Brown suggested in August that
the university take on grade inflation by putting class rank or
grade-point-average percentiles on student transcripts.
Changing the transcripts would give potential
employers and graduate schools a clearer picture of student achievement,
Brown said.
At the Boulder campus, the average GPA rose from
2.87 in 1993 to 2.99 in 2004.
Regents are not likely to vote on the issue for a
couple of months.
Regent Tom Lucero wants to go beyond Brown's
suggestion and model CU's policy after Princeton University, where
administrators instituted a limit on A's two years ago.
"As long as we do something to address this issue,
I'll be happy nonetheless," he said.
But many professors believe academic rigor is a
faculty issue and regents should stay out of it.
"Top-down initiatives ... will likely breed not
higher expectations but a growing sense of cynicism," said a report from the
Boulder Faculty Assembly, which opposes Brown's proposals.
Still, the group wrote that even though grade
inflation has been "modest," the issue of academic rigor "deserves serious
ongoing scrutiny."
"More important than the consideration of grades is
the quality of education our students receive," said Boulder communication
professor Jerry Hauser.
CU graduates are getting jobs at top firms, landing
spots in elite graduate schools and having no trouble passing bar or
licensing exams, he said.
But faculty who believe grade inflation is a
serious problem said they welcome regent input.
Ignorant of Their Ignorance
My undergraduate
students can’t accurately predict their academic performance or skill levels.
Earlier in the semester, a writing assignment on study styles revealed that 14
percent of my undergraduate English composition students considered themselves
“overachievers.” Not one of those students was receiving an A in my course by
midterm. Fifty percent were receiving a C, another third was receiving B’s and
the remainder had earned failing grades by midterm. One student wrote,
“overachievers like myself began a long time ago.” She received a 70 percent on
her first paper and a low C at midterm.
Shari Wilson, "Ignorant of Their
Ignorance," Inside Higher Ed, November 16, 2006 ---
http://www.insidehighered.com/views/2006/11/16/wilson
Jensen comment
This does not bode well for self-assessment.
What not to say to your professor/instructor
Top Ten No Sympathy Lines (Plus a Few Extra) ---
http://www.uwgb.edu/dutchs/nosymp.htm
Here are some samples:
Think of it as a TOP TEN list with a few bonus items:
- This Course Covered Too Much Material...
- The Expected Grade Just for Coming to Class is
a B
- I Disagreed With the Professor's Stand on ----
- Some Topics in Class Weren't on the Exams
- Do You Give Out a Study Guide?
- I Studied for Hours
- I Know The Material - I Just Don't Do Well on
Exams
- I Don't Have Time For All This (...but you
don't understand - I have a job.)
- Students Are Customers
- Do I Need to Know This?
- There Was Too Much Memorization
- This Course Wasn't Relevant
- Exams Don't Reflect Real Life
- I Paid Good Money for This Course and I
Deserve a Good Grade
- All I Want Is The Diploma
RateMyProfessors has some real-world examples of comments that professors
hated even worse ---
http://www.ratemyprofessors.com/Funniest.jsp
A few samples are shown below:
- You can't cheat in her class because no one knows the answers.
- His class was like milk, it was good for 2 weeks.
- Houston, we have a problem. Space cadet of a teacher, isn't quite
attached to earth.
- I would have been better off using the tuition money to heat my
apartment last winter.
- Three of my friends got A's in his class and my friends are dumb.
- Emotional scarring may fade away, but that big fat F on your transcript
won't.
- Evil computer science teaching robot who crushes humans for pleasure.
- Miserable professor - I wish I could sum him up without foul language.
- Instant amnesia walking into this class. I swear he breathes sleeping
gas.
- BORING! But I learned there are 137 tiles on the ceiling.
- Not only is the book a better teacher, it also has a better personality.
- Teaches well, invites questions and then insults you for 20 minutes.
- This teacher was a firecracker in a pond of slithery tadpoles.
- I learned how to hate a language I already know.
- Very good course, because I only went to one class.
- He will destroy you like an academic ninja.
- Bring a pillow.
- Your pillow will need a pillow.
- If I was tested on her family, I would have gotten an A.
- She hates you already.
Blackboard Will Soon Do Online Course Evaluations:
Should They Be Shared With the Administrators and/or the Public?
"Digital Assessments," by David Epstein, Inside Higher
Ed, June 20, 2006 ---
http://www.insidehighered.com/news/2006/06/20/blackboard
Assessment is quickly becoming the new black. It’s
one of the themes of the Secretary of Education’s
Commission on the Future of Higher Education. More
and more institutions, some prodded by accreditors, are looking for rigorous
ways — often online — to compile course data.
Now
Blackboard, a leading provider of course
management software, is making plans to enter the assessment field.
Blackboard already offers the capability to do
course evaluations, and for over a year-and-a-half the company has been
researching more comprehensive assessment practices.
The prospect of online evaluations and assessments,
for many faculty members, conjures images of
RateMyProfessors.com, the unrestricted
free-for-all where over 700,000 professors are rated — often to their dismay
— by anonymous reviewers. Blackboard — and some others looking to enter
the evaluation field — are planning very different and more educationally
oriented models. Blackboard’s approach is oriented more toward evaluating
the course than the professor.
Blackboard has generally enjoyed a good reputation
among faculty members, dating to its beginnings as a small startup. One of
the things that has endeared Blackboard to academics is the ability they
have had to customize the company’s products, and Blackboard, though it’s no
longer small, will seek to keep important controls in the hands of
institutions.
With institutions looking to do evaluations and
assessment online, Debra Humphreys, a spokeswoman with the Association of
American Colleges and Universities, said that Blackboard’s outcomes
assessment program “could make trends that are already underway easier for
schools.”
David Yaskin, vice president for product marketing
at Blackboard, said that a key component of Blackboard’s system — which is
in development — will likely be online portfolios that can be tracked in
accordance with learning outcomes that are determined by faculty members,
departments or institutions.
Yaskin said he’d like to see a system with
“established outcomes, and a student has to provide evidence” of progress
toward those outcomes, whether in the form of papers, photography
collections or other relevant measures. Yaskin added that faculty members
could create test questions as well, if they are so inclined, but that, for
Blackboard’s part, the “current plan is not to use centralized testing in
version 1.0, because higher ed is focused on higher orders of learning.”
One of the most powerful aspects of the program,
Yaskin said, will likely be its ability to compile data and slice it in
different ways. Institutions can create core sets of questions they want,
for a course evaluation, for example, but individual departments and
instructors can tailor other questions, and each level of the hierarchy can
look at its own data. Yaskin said that it’s important to allow each level of
that hierarchy to remain autonomous. He added that there should be a way for
“faculty members to opt out” of providing the data they got from tailored
questions to their superiors if they want. Otherwise, he said, faculty
members might be reluctant to make full use of the system to find out how
courses can be improved.
Yaskin added that, if certain core outcomes are
defined by a department, the department can use the system to track the
progress of students as they move from lower to upper level courses.
Because Blackboard, which bought WebCT, has 3,650
clients, any service it can sell to its base could spread very quickly.
While details on pricing aren’t available, the assessment services will be
sold separately from course management software.
The idea of online evaluation is not new.
Blackboard has been looking to colleges already using online course
evaluations and assessments for ideas.
Washington University in St. Louis — which wasn’t
one of the consulted institutions named by Blackboard — took over five years
to develop an internal online course evaluation system. A faculty member in
the anthropology department developed templates, and other faculty members
can add specific questions. Students then have access to loads of numerical
data, including average scores by department, but the comments are reserved
for professors. Henry Biggs, associate dean of Washington University’s
College of Arts and Sciences, was involved with the creation of the system,
and said that too much flexibility can take away from the reliability of an
evaluation or assessment system.
Washington University professors have to petition
if they want their ratings withheld. “If faculty members can decide what to
make public, there can be credibility issues,” Biggs said. “It’s great for
faculty members to have a lot of options, but, essentially, by giving a lot
of options you can create a very un-level playing field.”
Biggs said that the Blackboard system could be
great for institutions that don’t have the resources to create their own
system, but that a lot of time is required of faculty members and
administrators to manage an assessment system even if the fundamental
technology is in place. “The only way it can really work is if there are
staff that are either hired, or redirected to focus entirely on getting that
set up,” Biggs said. “I don’t think you will find professors with time to do
that.”
Humphreys added that “the real time is the labor”
from faculty members, and that technology often doesn’t make things so much
easier, but may make something like assessments better. “People think of
technology as saving time and money,” Humphreys said. “It rarely is that,
but it usually adds value,” like the ability to manipulate data extensively.
Some third-party course evaluation systems already
offer tons of data services.
OnlineCourseEvaluations.com has been working with
institutions — about two dozen clients currently — for around three years
doing online evaluations.
Online Course Evaluations, according to president
Larry Piegza, also allows an institution to develop follow-up questions to
evaluation questions. If an evaluation asks, for example, if an instructor
spoke audibly and clearly, Piegza said, a follow-up question asking what
could be done – use a microphone; face the students – to improve the
situation can be set to pop up automatically. Additionally, faculty members
can sort data by ratings, so they can see comments from all the students who
ripped them, or who praised them, and check for a theme. “We want teachers
to be able to answer the question, ‘how can I teach better tomorrow?’”
Piegza said.
Daily Jolt, a site that has a different
student-run information and networking page for each of about 100
institutions that host a page, is getting into the evaluation game, but the
student-run evaluation game.
Mark Miller and Steve Bayle, the president and
chief operating officer of Daily Jolt, hope to provide a more credible
alternative to RateMyProfessors.com. Like RMP, Daily Jolt’s
evaluations, which should be fully unveiled next
fall, do not verify .edu e-mail addresses, but they do allow users to rate
commenters, similarly to what eBay does with buyers and sellers, and readers
can see all of the posts by a particular reviewer to get a sense of that
reviewer.
Biggs acknowledged that student-run evaluation
sites are here to stay, but said that, given the limited number of courses
any single student evaluates, it’s unlikely that reviewing commenters will
add a lot of credibility. Miller said that faculty members will be able to
pose questions in forums that students can respond to.
“A lot of faculty members want to put this concept
[of student run evaluations] in a box and make it go away,” Miller said.
“That’s not going to happen, so we might as well see if we can do it in a
respectful way.”
Continued in article
Jensen Comment
I think course evaluations should be private information between students in a
class and the instructor. They should be required, but they should not be used
in tenure, performance, and pay evaluations. One huge problem is that if they
are not private communications, research shows that they lead to grade
inflation. Another huge problem is that students who fill out the evaluations
are not personally accountable for lies, misguided humor, and frivolous actions.
What students want are popular teachers, who are not necessarily the best
medicine for education.
Differences between "popular teacher"
versus "master teacher"
versus "mastery learning"
versus "master educator."
http://faculty.trinity.edu/rjensen/assess.htm#Teaching
Princeton University has announced success in its
campaign against grade inflation.
In 2004, the university announced guidelines designed
to limit the percentage of A grades, based on the belief that there were far too
many being awarded. Data released this week by the university found that in
2004-7, A grades (A+, A, A-) accounted for 40.6 percent of grades in
undergraduate courses, down from 47.0 percent in 2001-4. In humanities
departments, A’s accounted for 45.9 percent of the grades in undergraduate
courses in 2004-7, down from 55.5 percent in 2001-4. In the social sciences,
there were 37.6 percent A grades in 2004-7, down from 43.3 percent in the
previous three years. In the natural sciences, there were 35.7 percent A grades
in 2004-7, compared to 37.2 percent in 2001-4. In engineering, the figures were
42.1 percent A’s in 2004-7, down from 50.2 percent in the previous three years.
Inside Higher Ed, September 19, 2007
"Fewer A’s at Princeton," by Scott Jaschik, Inside Higher Ed,
September 20, 2005 ---
http://www.insidehighered.com/news/2005/09/20/princeton
Princeton University students need to work harder
for the A’s.
The university released results Monday
of the first year under a new grading policy, designed to
tackle the issue of grade inflation. In the last academic
year, A’s (including plus and minus grades) accounted for
40.9 percent of all grades awarded. That may not be
consistent with a bell curve, but the figure is down from
46.0 percent the previous year, and 47.9 percent the year
before that.
Princeton’s goal
is to have A’s account for less than 35 percent of the
grades awarded. Nancy Malkiel, dean of the college at
Princeton, said that based on progress during the first
year, she thought the university would have no difficulty
achieving that goal.
The data indicate that some fields
have come quite close to the target while others lag. The
only category that stayed the same the year the new policy
took effect (natural sciences) was already near the target.
Percentage of Undergraduate A’s
at Princeton, by Disciplinary Category

Discipline          2004-5    2003-4
Humanities           45.5%     56.2%
Social sciences      38.4%     42.5%
Natural sciences     36.4%     36.4%
Engineering          43.2%     48.0%
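Read against the table above, the policy's first-year effect is just a percentage-point subtraction per discipline. A minimal sketch in Python (the figures are those reported above; the 35 percent threshold is Princeton's stated goal):

```python
# Percentage of undergraduate A's at Princeton by discipline,
# before (2003-4) and after (2004-5) the new grading policy.
a_share = {
    "Humanities":       (56.2, 45.5),
    "Social sciences":  (42.5, 38.4),
    "Natural sciences": (36.4, 36.4),
    "Engineering":      (48.0, 43.2),
}

TARGET = 35.0  # Princeton's stated goal: A's under 35 percent

for field, (before, after) in a_share.items():
    drop = before - after
    gap = after - TARGET
    print(f"{field:16s} fell {drop:4.1f} points; "
          f"{gap:4.1f} points above the 35% target")
```

The largest drop (10.7 points in the humanities) came in the category that started furthest from the target, consistent with Malkiel's comment that the fields lag unevenly.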
The university did not impose
quotas, but asked each department to review grading policies
and to discuss ways to bring grades down to the desired
level. Departments in turn discussed expectations for
different types of courses, and devised approaches to use.
For independent study and thesis grades, the Princeton
guidelines expect higher grades than for regular
undergraduate courses, and that was the case last year.
Malkiel said that she wasn’t
entirely certain about the differences among disciplines,
but that, generally, it was easier for professors to bring
grades down when they evaluate student work with exams and
problem sets than with essays. She said that by sharing
ideas among departments, however, she is confident that all
disciplines can meet the targets.
Universities should take grade
inflation seriously, she said, as a way to help their
students.
“The issue here is how we do
justice to our students in our capacity as educators, and we
have a responsibility to show them the difference between
their very best work and their good work, and if we are
giving them the same grades for the very best work and for
their good work, they won’t know the difference and we won’t
stretch them as far as they are capable of stretching,” she
said.
Despite the additional pressure on
students who want A’s, she said, professors have not
reported any increase in students complaining about or
appealing the grades.
In discussions about grade
inflation nationally, junior faculty members have complained
that it is hard for them to be rigorous graders for fear of
getting low student evaluations. Malkiel said that she
understood the concern, and that Princeton’s approach — by
focusing attention on the issue — would help. “What this
institution is saying loud and clear is that all of us
together are expected to be responsible. So if you have a
culture where the senior faculty are behaving that way, it
will make it easier for the junior faculty to behave that
way.”
Melisa Gao, a senior at Princeton
and editor in chief of The Daily Princetonian, said
that student reactions to the tougher grading policy have
varied, depending on what people study. Gao is a chemistry
major and she said that the new policy isn’t seen as a
change in her department.
Professors have drawn attention to
the new policy at the beginning of courses, and Gao said
that some students say that they are more stressed about
earning A’s, but that there has not been any widespread
criticism of the shift.
Many companies are recruiting on
campus now, and Gao said that students have wondered if they
would be hurt by their lower grades. Princeton officials
have said that they are telling employers and graduate
schools about the policy change, so students would not be
punished by it.
But, Gao added, “at the end of the
day, you have a number on a transcript.”
Controversial Student Evaluations of Their Instructors
In most instances, instructors are accountable for
their grading and evaluations of students. Virtually all colleges have
grading appeals processes. Beyond internal appeals processes are courts of
law and millions of lawyers who just might help sue an instructor.
Virtually all student evaluations of instructors are
anonymous. Anonymous students are not accountable in any way
for their evaluations of instructors. I've long been in favor of
anonymous student evaluations, but I think the evaluations should only be seen
by the instructors being evaluated. My main criticism is that both
anecdotal and formal research suggest that using anonymous evaluations for
tenure, promotion, and salary decisions compromises academic
standards and course content. It's a major source of grade inflation in
the United States ---
http://faculty.trinity.edu/rjensen/assess.htm#GradeInflation
When courses are evaluated by an entire class, outliers will hopefully be
"averaged out." On RateMyProfessor, by contrast, the database is filled
mostly with outliers, which probably accounts for the fact that most
evaluations amount to either "A" grades or "F" grades, judging from the
comments about instructors. There are too few responses, especially in a
given year, for any "averaging out."
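The "averaging out" point can be made concrete with a toy example (all ratings here are invented for illustration, and the assumption that disgruntled students are likelier to post is exactly the self-selection worry raised above):

```python
def mean(xs):
    """Arithmetic mean of a list of ratings."""
    return sum(xs) / len(xs)

# Hypothetical 1-5 ratings: most of a 40-student class is moderately
# satisfied, with two angry and two delighted outliers.
full_class = [3] * 24 + [4] * 12 + [1, 1, 5, 5]

# On a public site only a handful post, and the disgruntled
# are assumed likelier to bother.
self_selected = [1, 1, 1, 5]

print(f"full class:    {mean(full_class):.2f}")    # prints 3.30
print(f"self-selected: {mean(self_selected):.2f}")  # prints 2.00
```

In the required in-class evaluation the four outliers barely move the average; in the self-selected sample they *are* the average, which is why a few postings can misrepresent a course.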
Are professors upset with RateMyProfessor? I doubt that most know about it
or care to know about it.
Such are some of the comments posted on
RateMyProfessors -- a 6-year-old site that archives student critiques of most
popular and least liked profs. With a database of more than 4 million ratings at
more than 5,000 institutions of higher learning, the website has become a staple
for many college students who use it to choose classes based on professors'
evaluations.
Joanna Glasner, "Prof-Ratings Site Irks Academics," Wired News, September
29, 2005 ---
http://www.wired.com/news/business/0,1367,68941,00.html?tw=wn_tophead_4
Jensen Comment
The RateMyProfessor site (for the U.S. and Canada) is at
http://www.ratemyprofessors.com/index.jsp
When this site commenced six years ago, students tried to outdo each other with
humorous and highly caustic evaluations that seemingly were written more for
entertainment than serious evaluation. I sense that over time, the
evaluations are more serious and are intended, in large measure, to be more
informative about a course and an instructor. However, the site still has
a featured "Funny Ratings" tab that continues to encourage humor over
seriousness ---
http://www.ratemyprofessor.com/Funniest.html
One thing that is entirely clear is that more and more professors are now being
evaluated at this site. Nearly 635,000 instructors from over 5,000 schools
are now in the database, and thousands of evaluations are being added daily.
Rules for evaluations are available at
http://www.ratemyprofessor.com/rater_guidelines.html
A continuing problem is that the evaluations are often given by outlier students
who probably got very high or very low grades from an instructor they are
praising/lambasting. A moral hazard is that really disgruntled students
may say untrue things, and that several disgruntled students may on occasion
team up to make their evaluations sound consistent. The comments are not
necessarily reflective of the sentiments of the majority of students in a
course, especially since all respondents constitute such a minuscule percentage of
students in most courses across the six years of building this RateMyProfessor
database.
But after reading the evaluations of many professors that I know, I think
many students who send in comments these days want to be fair even to professors
they don't particularly like personally. Many show respect for the
instructor even if they think the course is overly hard or overly boring.
Very often student comments focus on grading where instructors are rated as
being either "very fair" or "extremely unfair with teacher's pets who get top
grades no matter what." I am always impressed when professors are rated as
being extraordinarily tough and, at the same time, receive high evaluations from
their students. Virtually none of the students appreciate a course that
features grappling with sweat-rendering ambiguity and a pedagogy of having to
learn for themselves.
Always keep in mind that it's common for students to want a cut and dried course.
This type of course is not necessarily easy, but generally it does not make
students grapple with ambiguity in content or ambiguity in the grading process.
Students always want to see the answer books or have the instructor explain the
"best solution." Unfortunately, ambiguity in content and process is
what they will later discover in the real world of adulthood.
Top MBA programs often have a better idea when assigning complex and
realistic cases where even the case writers themselves know of no right answers
and suggest that the importance of case analysis is in the process rather than
finding non-existent optimal answers. Generally the most realistic
problems in life have no optimal answers, but students hate a course where they
are not rewarded gradewise for finding best or better answers. The
well-known maxim that "it only matters how you play the game" does not apply in
the minds of students chasing "A" grades.
Except in rare instances, students are highly critical of instructors who
force students to sweat and strain finding answers on their own. For
example, in my own university there was a first-year seminar course intended for
student discussions of a new book every week. Students despised a
particular instructor who courageously never opened his own mouth in any class
other than the first class of the semester. This is
most unfortunate since learning on your own is generally the best pedagogy for
deep learning, long-term memory, creativity, and confrontations with ambiguity
---
http://faculty.trinity.edu/rjensen/265wp.htm
Because a minuscule proportion of an instructor's students send messages to
RateMyProfessor, the database should never be used for tenure, promotion, or
performance evaluations. Serious evaluations are impacted, in some cases
very heavily, by the formal course evaluations that colleges and universities
require in all courses. Instructor evaluation is a good thing when it inspires an
instructor to improve in course preparation, course delivery, and other types of
communications with students. It is a bad thing when it motivates the
instructor to give easier courses and/or become an easier grader.
It would be interesting to know the course grades of the most negative students.
In most instances, instructors are accountable for their grading and evaluations
of students. Virtually all colleges have grading appeals processes.
Beyond internal appeals processes are courts of law and millions of lawyers who
just might help sue an instructor. Anonymous students are not accountable
in any way for their evaluations of instructors.
Evidence from research into such matters indicates that collegiate student
evaluations do lead to easier grading, out of fear that low evaluations will
adversely impact tenure outcomes, promotions, and salaries ---
http://faculty.trinity.edu/rjensen/assess.htm#GradeInflation
I don't think RateMyProfessor has much impact on changing instructor behavior
or grading, because most professors I know either don't know about this site or
don't care to view this site because of the self-selection process of the few
students who send messages to RateMyProfessor relative to the total number of
students who do not send in messages.
There is also moral hazard if the site is ever used for serious performance
evaluations. Really unscrupulous professors might selectively request that
a few "pets" submit evaluations to RateMyProfessor, especially when he/she knows
these students will give glowing evaluations much higher than those of the
majority of the class. I don't
think this is a problem up to now, because the site is never looked at regularly
by most professors and administrators, at least not by those who I know.
But it is much easier to manipulate a few evaluations per instructor on the
RateMyProfessor site than to manipulate the many evaluations collected when all
students are asked to evaluate the instructor in every course.
The RateMyProfessor site might have future impact when it comes to hiring
faculty applicants seeking to change universities. If that happens it
would be most unfortunate due to the extreme limitations of the data gathering
process. Unfortunately one small rumor can destroy a career, and one small
rumor can be started by something discovered in RateMyProfessor.
To the extent RateMyProfessor leads to false rumors resting upon so few
respondents, this site is bad for the academy. To the extent that it leads
to popularity contests between instructors more concerned with student happiness
than student learning, this site is bad for the academy.
The one true fact in life is that our knowledge of the world has become so
vast and so complex, that the only way for students to really learn is with
sweat, tears, and inevitable frustration when dealing with ambiguities.
Students are often too ignorant (even if they are very bright) to understand
that spoon feeding is not the best way to learn. They are often too
immature to realize that the best instructors are the ones who take the time and
trouble to critique their work in depth. They are also too ignorant in
many instances to know what is very important relative to what is less important
in course content. Sometimes it takes years after graduation to be
grateful for having learned something that seemed pointless or a waste of time
years earlier.
In my own case an accounting professor named Kesselman, whom I hated most
in college, became the professor that I belatedly, after graduation, came to
appreciate the most. And a sweet and elderly teacher named Miss Miller,
who told us so many interesting things about her life in our high school algebra
class, became the one I appreciated the least in retrospect, because I was Miss
Miller's top student who had to take algebra in my first semester at Iowa State
University when I should've been ready to plunge into calculus.
Woebegone About Grade Inflation
Grade inflation continues to occupy
the attention of the media, the academy and the public at
large. As a few Ivy League universities have adjusted
grading policies, and a few of their professors have
captured headlines with their statements on the issue,
people have taken note. Absent from this discussion,
however, are the voices of the silent majority: those who
teach at non-elite institutions, as well as those at elite
institutions who are not publicly participating in the
debate.
Janice McCabe and Brian Powell, "Woebegone About Grade
Inflation," Inside Higher Ed, July 27, 2005 ---
http://www.insidehighered.com/views/2005/07/27/mccabe
Grade Inflation and Abdication
Over the last generation, most colleges and
universities have experienced considerable grade inflation. Much lamented by
traditionalists and explained away or minimized by more permissive faculty, the
phenomenon presents itself both as an increase in students’ grade point averages
at graduation as well as an increase in high grades and a decrease in low grades
recorded for individual courses. More prevalent in humanities and social science
than in science and math courses and in elite private institutions than in
public institutions, discussion about grade inflation generates a great deal of
heat, if not always as much light. While the debate on the moral virtues of any
particular form of grade distribution fascinates as cultural artifact, the
variability of grading standards has a more practical consequence. As grades
increasingly reflect idiosyncratic and locally defined performance levels,
their value for outside consumers of university products declines. Who knows
what an “A” in American History means? Is the A student one of the top 10
percent in the class or one of the top 50 percent? Fuzziness in grading reflects
a general fuzziness in defining clearly what we teach our students and what we
expect of them. When asked to defend our grading practices by external observers
— parents, employers, graduate schools, or professional schools — our answers
tend toward a vague if earnest exposition on the complexity of learning, the
motivational differences in evaluation techniques, and the pedagogical value of
learning over grading. All of this may well be true in some abstract sense, but
our consumers find our explanations unpersuasive and on occasion misleading.
John V. Lombardi, "Grade Inflation and Abdication," Inside Higher Ed,
June 3, 2005 ---
http://www.insidehighered.com/views/2005/06/03/lombardi
It is important to consider counterarguments, such as the argument that it is
dysfunctional to make students compete for high grades. Probably the best
article contending that higher grades are not a leading scandal in higher
education is the following.
"The Dangerous Myth of Grade Inflation," by Alfie Kohn, The
Chronicle of Higher Education, September
8, 2002 --- http://www.alfiekohn.org/teaching/gi.htm
Jensen's Comment: Kohn's argument
seems to boil down to a conclusion that it is immoral to make students compete
for the highest grades. But he fails to account for the fact that
virtually all universities do make students compete for A grades. There
are simply a lot more winners (in some cases about 50%) in modern times.
How does he think this makes the very best students and the students who got
below average B grades feel?
Dartmouth's Answer
On May 23, 1994 the Faculty voted that transcripts
and student grade reports should indicate, along with the grade earned, the
median grade given in the class as well as the class enrollment. Departments may
recommend, with approval of the Committee on Instruction, that certain courses
(e.g., honors classes, independent study) be exempted from this provision.
Courses with enrollments of less than ten will also be exempted. At the bottom
of the transcript there will be a summary statement of the following type:
'Exceeded the median grade in 13 courses; equaled the median grade in 7 courses;
below the median grade in 13 courses; 33 courses taken eligible for this
comparison.' This provision applies to members of the Class of 1998 and later
classes.
"Median Grades for Undergraduate Courses" --- http://www.dartmouth.edu/~reg/courses/medians/index.html
The Emperor’s Not Wearing Any
Clothes
“But he has nothing on at all,” said a little
child at last. “Good heavens! listen to the voice of an innocent child,”
said the father, and one whispered to the other what the child had said.
“But he has nothing on at all,” cried at last the whole people. That made
a deep impression upon the emperor, for it seemed to him that they were right;
but he thought to himself, “Now I must bear up to the end.” And the
chamberlains walked with still greater dignity, as if they carried the train
which did not exist.
Hans Christian Andersen, "The Emperor's New Suit" (1837) --- http://hca.gilead.org.il/emperor.html
And many students get the highest grades
with superficial effort and sometimes with humor
It may be hard to get into Harvard, but it's easy
to get out without learning much of enduring value at all. A recent graduate's
report by Ross Douthat
"The Truth About Harvard," by Ross Douthat, The Atlantic,
March 2005 --- http://www.theatlantic.com/doc/print/200503/douthat
At the beginning of
every term Harvard students enjoy a one-week "shopping period,"
during which they can sample as many courses as they like and thus—or so
the theory goes—concoct the most appropriate schedule for their semesters.
There is a boisterous quality to this stretch, a sense of intellectual
possibility, as people pop in and out of lecture halls, grabbing syllabi and
listening for twenty minutes or so before darting away to other classes.
The enthusiasm
evaporates quickly once the shopping period ends. Empty seats in the various
halls and auditoriums multiply as the semester rattles along, until rooms
that were full for the opening lecture resemble the stadium of a losing
baseball team during a meaningless late-August game. There are pockets of
diehards in the front rows, avidly taking notes, and scattered observers
elsewhere—students who overcame the urge to hit the snooze button and
hauled themselves to class, only to realize that they've missed so many
lectures and fallen so far behind that taking notes is a futile exercise.
Better to wait for the semester's end, when they can take exhaustive notes
at the review sessions that are always helpfully provided—or simply go to
the course's Web site, where the professor has uploaded his lecture notes,
understanding all too well the character and study habits of his
seldom-glimpsed students.
Continued in article
Harvard University's grading policy is outlined at http://www.registrar.fas.harvard.edu/handbooks/instructor.2003-2004/chapter5/grading.html
Also see http://www.registrar.fas.harvard.edu/handbooks/instructor.2003-2004/chapter5/rank_list.html
Half the undergraduate students at Harvard get A or A- (up from a third
in 1985)
Less than 10% get a C or below
All Things Considered, November 21, 2001 · Student's
grades at Harvard University have soared in the last 10 years. According to a
report issued Tuesday by the dean of undergraduate education, nearly half of
the grades issued last year were A's or A-minuses. In 1985, just a third of
the grades were A or A-minus. Linda Wertheimer talks with Susan Pedersen, Dean
of Undergraduate Education and a Professor of History at Harvard University,
about grade inflation.
Harvard Grade Inflation, National Public Radio --- http://www.npr.org/templates/story/story.php?storyId=1133702
You can also listen to the NPR radio broadcast about
this at the above link.
Can no longer reward the very best with higher grades
Students at Harvard who easily get A's may be smarter, but with so many of
them, professors can no longer reward the very best with higher grades. Losing
this motivational tool could, paradoxically, cause achievement to fall.
"Doubling of A's at Harvard: Grade inflation or brains?" By Richard
Rothstein, The New York Times, December 5, 2001 --- http://www.epinet.org/content.cfm/webfeat_lessons20011205
A Harvard University report last spring complained
of grade inflation that makes it easier to get high grades. Now the academic
dean, Susan Pedersen, has released data showing that 49 percent of
undergraduate grades were A's in 2001, up considerably from 23 percent in
1986.
Colleges and high schools are often accused of
tolerating grade inflation, because teachers have adopted lower standards
and hesitate to confront lower-performing students. Critics warn that if
grading is too easy, learning will lag.
But grade inflation is harder to detect than it
seems.
Inflation means giving a higher value to the same
thing that once had a lower one. The Bureau of Labor Statistics tracks price
inflation, but it is not easy. Automobile
prices have gone up, but cars now have air bags and electronic ignitions.
Consumers today pay more not only for the same thing but for a better thing.
These factors are hard to untangle.
Grade inflation is similarly complicated. More A's
could be a result of smarter students. Ivy League colleges compute an
academic index for freshmen based on their College Board SAT and achievement
test scores. Harvard's index numbers have been rising, and few students have
numbers that were common at the low end of the class 15 years ago. So if
students are more proficient, there should be more A's, even if grading is
just as strict.
At Harvard, Dean Pedersen noted that students might
study harder than before, perhaps because graduate schools are more
competitive. Classes are now smaller, so better teaching could result in
better learning. More A's would then reflect more achievement, not
inflation.
What grades measure can also change. Harvard
professors now say they demand more reasoning and less memorization. Whether
or not this is desirable, higher grades that follow may not be inflationary.
Government price surveyors face similar problems when products change: if
consumers who once shopped at Sears now buy the same shirt at Nordstrom, are
they paying more for the same thing (inflation) or for a different thing
(more service)?
Dr. Pedersen agrees that higher grades may
sometimes be given for the same work. But she doubts that inflation is the
main cause of the rise in grades. Another dean, Harry R. Lewis, calculated
that Harvard grades rose as much from 1930 to 1966 as from 1967 to the
present, so the trend is not new. Neither are accusations of inflation: a
Harvard report in 1894 also warned that grades of A and B had become too
easy.
Grade inflation in high schools is elusive as well.
RAND researchers found there was actually some national grade deflation from
1982 to 1992 — students with the same math scores got lower grades at the
end of the period than at the start.
But seniors with similar scores on entrance exams
(the SAT and ACT) now have slightly higher grades than before. Perhaps this
inconsistency results from inflation affecting top students (those likely to
take the exams) more than others. Or perhaps grades deflated from 1982 to
1992, but inflated at other times.
Since 1993, the State of Georgia has given free
college tuition to students with B averages. Critics say grade inflation
resulted because, with B's worth a lot of money, high school teachers now
give borderline students a greater benefit of the doubt.
But if a promise of scholarships led students to
work harder, higher grades would not signal inflation. And indeed, one study
found that Georgia's black students with B averages had higher SAT scores
than before the program began.
Even if inflation is less than it seems, rising
grades pose a problem that rising prices do not. Prices can rise without
limit, but grades cannot go above A+. When more students get A's, grades no
longer can show which ones are doing truly superior work. This is called
"grade compression" and is probably a more serious problem than
inflation.
Students at Harvard who easily get A's may be
smarter, but with so many of them, professors can no longer reward the very
best with higher grades. Losing this motivational tool could, paradoxically,
cause achievement to fall.
Continued in the article
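Rothstein's "grade compression" point can be illustrated directly: because the scale is capped at A, a letter-grade mapping stops distinguishing students once enough of them clear the top cutoff. A toy sketch (the cutoffs and score distributions are invented):

```python
def letter(score):
    """Map a 0-100 score to a letter grade using invented cutoffs."""
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    return "D"

# Scores rise across cohorts (smarter students, easier grading, or both).
scores_1986 = [72, 78, 84, 88, 91]
scores_2001 = [88, 91, 93, 96, 99]

print([letter(s) for s in scores_1986])  # ['C', 'C', 'B', 'B', 'A']
print([letter(s) for s in scores_2001])  # ['B', 'A', 'A', 'A', 'A']
# In the later cohort four of five students collapse into 'A' even
# though their scores span 8 points: the grade ceiling compresses
# real differences at the top of the class.
```

Prices can rise without limit, as the article notes, but grades cannot rise past A+, so rising achievement and rising leniency both exhaust the scale's ability to rank the best students.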
Students get two grades from
Harvey Mansfield at Harvard University
"The Truth About Harvard," by Ross Douthat, The Atlantic,
March 2005 --- http://www.theatlantic.com/doc/print/200503/douthat
Bob Jensen's threads on grade inflation are at http://faculty.trinity.edu/rjensen/assess.htm#GradeInflation
He paused, flashed
his grin, and went on. "Nevertheless, I have recently decided that
hewing to the older standard is fruitless when no one else does, because all
I succeed in doing is punishing students for taking classes with me.
Therefore I have decided that this semester I will issue two grades to each
of you. The first will be the grade that you actually deserve —a C for
mediocre work, a B for good work, and an A for excellence. This one will be
issued to you alone, for every paper and exam that you complete. The second
grade, computed only at semester's end, will be your, ah, ironic grade —
'ironic' in this case being a word used to mean lying —and it will be
computed on a scale that takes as its mean the average Harvard grade, the
B-plus. This higher grade will be sent to the registrar's office, and will
appear on your transcript. It will be your public grade, you might say, and
it will ensure, as I have said, that you will not be penalized for taking a
class with me." Another shark's grin. "And of course, only you
will know whether you actually deserve it."
Mansfield had been
fighting this battle for years, long enough to have earned the sobriquet
"C-minus" from his students, and long enough that his frequent
complaints about waning academic standards were routinely dismissed by
Harvard's higher-ups as the out-of-touch crankiness of a conservative fogey.
But the ironic-grade announcement changed all that. Soon afterward his photo
appeared on the front page of The Boston Globe, alongside a story about the
decline of academic standards. Suddenly Harvard found itself mocked as the
academic equivalent of Garrison Keillor's Lake Wobegon, where all the
children are above average.
You've got to be unimaginatively
lazy or dumb to get a C at Harvard (less than 10% get below a B-)
Harvard does not admit dumb students, so the C students must be unimaginative,
troubled, and/or very lazy.
It doesn't help that Harvard students are
creatively lazy, gifted at working smarter rather than harder. Most of my
classmates were studious primarily in our avoidance of academic work, and
brilliant largely in our maneuverings to achieve a maximal GPA in return for
minimal effort.
"The Truth About Harvard," by Ross Douthat, The Atlantic,
March 2005 --- http://www.theatlantic.com/doc/print/200503/douthat
This may be partly
true, but I think that the roots of grade inflation —and, by extension,
the overall ease and lack of seriousness in Harvard's undergraduate academic
culture —run deeper. Understanding grade inflation requires understanding
the nature of modern Harvard and of elite education in general
—particularly the ambitions of its students and professors.
The students'
ambitions are those of a well-trained meritocratic elite. In the
semi-aristocracy that Harvard once was, students could accept Cs, because
they knew their prospects in life had more to do with family fortunes and
connections than with GPAs. In today's meritocracy this situation no longer
obtains. Even if you could live off your parents' wealth, the ethos of the
meritocracy holds that you shouldn't, because your worth as a person is
determined not by clan or class but by what you do and whether you succeed
at it. What you do, in turn, hinges in no small part on what is on your résumé,
including your GPA.
Thus the professor
is not just a disinterested pedagogue. As a dispenser of grades he is a
gatekeeper to worldly success. And in that capacity professors face upward
pressure from students ("I can't afford a B if I want to get into law
school"); horizontal pressure from their colleagues, to which even
Mansfield gave way; downward pressure from the administration ("If you
want to fail someone, you have to be prepared for a very long, painful
battle with the higher echelons," one professor told the Crimson); and
perhaps pressure from within, from the part of them that sympathizes with
students' careerism. (Academics, after all, have ambitions of their own, and
are well aware of the vicissitudes of the marketplace.)
It doesn't help
that Harvard students are creatively lazy, gifted at working smarter rather
than harder. Most of my classmates were studious primarily in our avoidance
of academic work, and brilliant largely in our maneuverings to achieve a
maximal GPA in return for minimal effort. It was easy to see the classroom
as just another résumé-padding opportunity, a place to collect the grade
(and recommendation) necessary to get to the next station in life. If that
grade could be obtained while reading a tenth of the books on the syllabus,
so much the better.
February 21, 2005 message from Bob Jensen
Below is a message from the former Dean of Humanities at Trinity
University. He’s now an emeritus professor of religion.
In particular, he claims Harvard had an A+ grade for recognizing the very
top students in a course. I think Harvard and most other universities have
dropped this grade alternative.
Second, he claims that there was a point system attached to the grades. Note especially the gaps in the point weightings between A- and B+, between B- and C+, and between C- and D+.
If this was used at Harvard for a period of time, it was dropped somewhere
along the way. This is unfortunate because it created a means by which the top
(A+) students could be recognized apart from those many A students and those
average students (the median grade at Harvard is now A-). The point system
provided a means of breaking down the many 4.0 gpa graduates at Harvard.
Harvard University's current grading policy is outlined at http://www.registrar.fas.harvard.edu/handbooks/instructor.2003-2004/chapter5/grading.html
Also see http://www.registrar.fas.harvard.edu/handbooks/instructor.2003-2004/chapter5/rank_list.html
Bob Jensen
-----Original Message-----
From: Walker, Wm O.
Sent: Sunday, February 20, 2005 3:50 PM
To: Jensen, Robert
Subject: RE: Bill Walker Question
Bob, all I know is what my son told me while he was an undergraduate
student at Harvard (1975-1979). As I recall, the scale was the following:
15 A+
14 A
13 A-
11 B+
10 B
9 B-
7 C+
6 C
5 C-
3 D+
2 D
1 D-
They may have changed the system sometime during the past twenty-six years.
I particularly like it because it not only gives the plus and minus grades but
also makes a greater distinction between A- and B+ than between B+ and B, etc.
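The scale Walker recalls can be encoded as a simple lookup table. The sketch below is purely illustrative (my own encoding, not Harvard's actual system); it shows how the skipped values (no 12, 8, or 4) make the distance between adjacent letter-grade tiers larger than the distance within a tier:

```python
# Hypothetical encoding of the recalled 15-point Harvard scale.
# The deliberate gaps (no 12, 8, or 4) widen the distance between
# A-/B+, B-/C+, and C-/D+ relative to steps within a tier.
POINTS = {
    "A+": 15, "A": 14, "A-": 13,
    "B+": 11, "B": 10, "B-": 9,
    "C+": 7,  "C": 6,  "C-": 5,
    "D+": 3,  "D": 2,  "D-": 1,
}

def average_points(grades):
    """Mean point score for a list of letter grades."""
    return sum(POINTS[g] for g in grades) / len(grades)

# Crossing the A-/B+ boundary costs 2 points, while moving from
# B+ down to B costs only 1, so tier boundaries are penalized more.
print(average_points(["A-", "B+"]))  # (13 + 11) / 2 = 12.0
```

Under such a scale, a transcript full of A+ grades would average 15 rather than a capped 4.0, which is how the system could separate the very top students from the many straight-A students.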
Question
How do Princeton, Dartmouth, and some other universities deal with grade inflation?
Princeton
University takes a (modest) stand on grade inflation
"Deflating the easy 'A'," by Teresa Méndez, Christian Science
Monitor, May 4, 2004 --- http://www.csmonitor.com/2004/0504/p12s02-legn.html
For an analysis of this see http://faculty.trinity.edu/rjensen/assess.htm#GradeInflation
Answers
as of 1996
The answers as of 1996 lie buried in the online article at http://www.princeton.edu/~paw/archive_old/PAW95-96/11_9596/0306note.html#story4
Or are professors just pressured to give out more
A's?
Students are getting smarter, or so it seems by the increasingly higher grades they're receiving. Last year, undergraduates
earned 8 percent more A's than they did just seven years ago and more than
twice as many as they did in 1969-70. In 1994-95, 41 percent of all grades
awarded were A's and 42 percent were B's, according to the Office of the
Registrar.
Princeton didn't invent grade inflation. According to Registrar C. Anthony Broh, it's a phenomenon of private, highly selective
institutions. Yet at the same time as grades are creeping up at Princeton,
undergraduate grades nationwide have been going down, according to a federal
study released last October. The drop, said Clifford Adelman, a senior
research analyst for the Department of Education, is due to a 37 percent
increase in the number of people attending college.
Public colleges aren't experiencing grade inflation (a continual increase in the average grade, explained Broh) at the same rate as highly selective institutions, because their curricula are structured differently. Ohio State's curriculum, for example, is designed to weed out students, said Broh.
Princeton saw grades inflate in the late 1960s and early 1970s. The
percentage of all grades that were A's jumped from 17 percent in 1969-70 to
30 percent in 1974-75. Students earned higher grades at Princeton and other
institutions, in part, because of the Vietnam War. Students whose
grade-point averages dropped too low were drafted, said Broh, "so
faculty generally felt pressure" to give high marks.
The percentages among grades remained fairly constant from the late 1970s through the early 1980s. In 1987-88, 33 percent
of grades were A's. Since then, grades have risen at about the same rate as
they did during the early 1970s. The primary reason for the jump, said Broh,
is that professors feel some pressure from students to give higher grades so
they can better compete for admission to graduate and professional schools.
Princeton's grade distribution is comparable to that of its peer
institutions. At Dartmouth the percentage of all grades that are A's rose
from 33 percent in 1977-78 to 43 percent in 1993-94, according to Associate
Registrar Nancy Broadhead. At Harvard, the hybrid grade A/A- represented 22
percent of all grades in 1966-67 and 43 percent in 1991-92, said spokeswoman
Susan Green. C's have virtually disappeared from Harvard transcripts,
reported Harvard Magazine in 1993.
Students aren't the only ones who apply subtle pressure to professors.
Several years ago, an instructor of linear algebra gave a third of the class
C's, and there was "a big uproar," said Joseph J. Kohn *56, the
chairman of the mathematics department. He received a "long
letter" from a dean who suggested that that kind of grading would
discourage the students.
Ten years ago, a third of a class earning C's was
normal, said Kohn. Professors feel they're supposed to grade
"efforts," not the product, he added.
Another reason for grade inflation, said Broh, is
that students are taking fewer courses Pass/D/Fail, which since 1990-91 have
been limited to one per term for each student. Therefore, students are
earning more A's and B's and fewer P's.
Some observers believe that students are just
smarter than they were 25 years ago, and they're working harder. The SAT
scores continue to rise, noted Broh.
Even if a professor wanted to "deflate"
grades, one person can't expect to "unilaterally try to reinvent
grading," said Lee C. Mitchell, the chairman of the English department.
One professor alone would be "demonized," if he or she tried to
grade "accurately," said Clarence F. Brown, Jr., a professor of
comparative literature. "The language of grading is utterly
debased," he added, noting that real grading is relegated to letters of
recommendation, a kind of "secret grading."
Not every professor and student on campus has succumbed to grade inflation,
however. In the mind of Dean of the School of Engineering and Applied
Science James Wei, a C is still average. Professors in the engineering
school still regularly give grades below B's, though "students are
indignant," he said.
According to Dean of the College Nancy Weiss Malkiel, the university
periodically reviews grade distribution. The administration encourages
faculty members to think carefully about grading patterns, but "we
don't tell [them] what grades to give," said Malkiel.
Harvard isn't planning on doing anything about the shift in grades, said
Green. Dartmouth, however, last year changed its grading policy. In an
effort to assess student performance more effectively, report cards and
transcripts now include not only grades, but also the median grade earned by
the class and the size of the class. The change may also affect grade
inflation, but it's too soon to tell if it has, said Broadhead.
In the end, perhaps grade inflation is inconsequential. As Kohn said,
"The important thing is what students learn, not what [grades] they
get." And as Dean of the Faculty Amy Gutmann told The Daily
Princetonian, "There is no problem [with grade inflation] as long as
grades reflect the quality of work done."
Chart: The graphic is not available online
Infographic by Jeff Dionise; Source: Office of the Registrar
This chart, provided by the Office of the
Registrar, shows the percentage of grades awarded over the last 25 years.
The percentage of A's and B's increased markedly in the late 1960s and early
1970s and again since the late 1980s. The percentage of P's (pass) dropped
dramatically in the early 1970s, in part because the Pass/D/Fail option lost
favor among students for fear that those evaluating their academic careers
would think they took lighter loads, said Registrar C. Anthony Broh. Also,
the university now allows fewer courses to be taken Pass/D/Fail. The
percentage of P's peaked in 1969-70, when students went on strike during the
Vietnam War and sympathetic faculty gave them the option of receiving either
a P or a normal grade. Many students opted for P's, said Broh.
Are Students Getting Smarter?
Or are professors just pressured to give out more A's?
The real issue isn't grade inflation, said Registrar C. Anthony Broh, it's
grade "compression." Because most grades awarded are A's and B's,
it's hard to differentiate between students at the top of a course.
February 20, 2005 reply from Glen Gray [glen.gray@CSUN.EDU]
If you are worried about grade inflation, think about this: a dean of a well-known research university (sorry, I can’t say who) sent a memo to his faculty suggesting that they RAISE the average GPA because grade inflation at other institutions is putting his students at a competitive disadvantage. So now we may have a race to see who has the highest average GPA.
February 20, 2005 reply from Roger Collins [rcollins@CARIBOO.BC.CA]
I think the following is unlikely to fly in the continuous assessment environment, but for seven years in the '70s/'80s I taught at a UK institution where all major exams (these accounted for around 80% of course marks and were held once per year) were double marked: once by the instructor directly responsible for the class and once by an associate from the same department. Exam results were also reviewed by a committee responsible for the degree, and samples were sent off to an external examiner (one of our externals was a certain David Tweedie).
This method is VERY effective at combating student pressure on instructors, but fairly time-consuming; unless faculty accept it (we did) as part of the normal workload, it may also become expensive.
Regards,
Roger
Roger Collins
Associate Professor UCC (soon to be TRU) School of Business
No wonder kids take the easy way out: The era of work and sacrifice
is long gone
The pressure for U.S. high schools to toughen up is
growing. But when schools respond with stiffened requirements, as many have done
by instituting senior projects, they often find that students and parents aren't
afraid to fight back.
Robert Tomsho, "When High Schools Try Getting Tough, Parents Fight
Back," The Wall Street Journal, February 8, 2005, Page A1 --- http://online.wsj.com/article/0,,SB110782391032448413,00.html?mod=todays_us_page_one
In Duvall, Wash., Projects Required Months of Work -- Then Parental Protests Kicked
In
Fearing your student evaluations, how much time and trouble should you
devote to email questions from your students?
For junior faculty members, the barrage of e-mail has
brought new tension into their work lives, some say, as they struggle with how
to respond. Their tenure prospects, they realize, may rest in part on student
evaluations of their accessibility. The stakes are different for professors
today than they were even a decade ago, said Patricia Ewick, chairwoman of the
sociology department at Clark University in Massachusetts, explaining that
"students are constantly asked to fill out evaluations of individual faculty."
Students also frequently post their own evaluations on Web sites like
www.ratemyprofessors.com and describe their
impressions of their professors on blogs.
Jonathan D. Glater, "To: Professor@University.edu Subject: Why It's All About
Me," The New York Times, February 21, 2006 ---
http://www.nytimes.com/2006/02/21/education/21professors.html
Bob Jensen's threads on the dark side of education technology ---
http://faculty.trinity.edu/rjensen/000aaa/theworry.htm
Reed College, a selective liberal arts college in
Oregon, where the average grade-point average has remained a sobering 2.9 (on a
4.0 scale) for 19 years.
See below
Valen E. Johnson, a biostatistics professor at the
University of Michigan and author of "Grade Inflation: A Crisis in College
Education" (Springer Verlag), said the use of student ratings to evaluate
teachers also inflates grades: "As long as our evaluations depend on their
opinion of us, their grades are going to be high."
See below
Administrators and some faculty at some of the country's top universities
have proposed correcting for so-called grade inflation by limiting A's.
It's relatively easy to get an A at Princeton, but it's easier at Harvard.
"Is It Grade Inflation, or Are Students Just Smarter?" by Karen W.
Arenson, The New York Times, April 18, 2004 --- http://www.nytimes.com/2004/04/18/weekinreview/18aren.html
A million dollars isn't what it used to be, and neither is an A in college.
A's - including A-pluses and A-minuses - make up
about half the grades at many elite schools, according to a recent survey by
Princeton of the Ivy League and several other leading universities.
At Princeton, where A's accounted for 47 percent of
grades last year, up from 31 percent in the 1970's, administrators and some
faculty have proposed correcting for so-called grade inflation by limiting A's
to 35 percent of course grades.
Not everyone is convinced there is a problem. A
recent study by Clifford Adelman of the United States Department of Education
concluded that there were only minor changes in grade distributions between
the 1970's and the 1990's, even at highly selective institutions. (A bigger
change, he said, was the rise in the number of students withdrawing from
courses and repeating courses for higher grades.)
Alfie Kohn, author of the forthcoming book "More Essays on Standards, Grading and Other Follies" (Beacon Press), says that rising grades "don't in themselves prove that grade inflation exists."
"It's necessary to show - and, to the best of my knowledge, it has never been shown - that those higher grades are undeserved," he said.
Is it possible that the A students deserve their
A's?
Getting into colleges like Princeton is far more
difficult than it used to be. And increasing numbers of students are being
bred like racehorses to breeze through standardized tests and to write essays
combining Albert Einstein's brilliance with Mother Teresa's compassion.
Partly to impress admissions officers, students are
loading up on Advanced Placement courses. The College Board said the number
taking 10 or more such courses in high school is more than 10 times what it
was a decade ago. And classes aimed at helping them do better on the SAT exams
are booming.
"Back in 1977, when I graduated from high
school, it had to be less than 25,000 students nationally who spent more than
$100 on preparing for the SAT," said John Katzman, founder and chief
executive of The Princeton Review, which tutors about 60,000 students a year
for the SAT's. "It was the C students who prepped, not the A
students," he added. "Now it's got to be circa 200,000 or 250,000
students who are going to spend more than $400 to prepare for the SAT."
But Wayne Camara, vice president of research at the College Board, said that while students are increasingly well prepared, "that in no way accounts for the shift in grades we are seeing."
"Grades are not like temperatures or weights," he said. "What constitutes an A or a B has changed, both in high school and in college."
He said teachers are aware of how competitive the
academic world has become and try to help students by giving better grades.
"If you graduated from college in the 1950's and you wanted to go to law
school or a graduate program, you could," Dr. Camara said. "Today it
is very difficult. You are not going to be able to graduate from Harvard or
Princeton with a 2.8 grade point average and get into Georgetown Law."
In addition, one recent Princeton graduate who works
in investment banking and has participated in recruiting meetings cautioned in
a letter to The Daily Princetonian that hiring practices can be superficial,
and that grade-point averages are one of the first items scrutinized on a
résumé.
Stuart Rojstaczer, a geology professor at Duke who
runs the Web site www.Gradeinflation.com, says that higher grades are the
result of a culture where the student-consumer is king. "We don't want to
offend students or parents," he said. "They are customers and the
customer is always right."
Valen E. Johnson, a biostatistics professor at the
University of Michigan and author of "Grade Inflation: A Crisis in
College Education" (Springer Verlag), said the use of student ratings to
evaluate teachers also inflates grades: "As long as our evaluations
depend on their opinion of us, their grades are going to be high."
Even if the Princeton plan is approved, Professor
Johnson, who unsuccessfully tried to lower grades at Duke University a few
years ago, cautioned that reform is difficult. "It is not in the interest
of the majority to reform the system," he said. "Assigning grades,
particularly low grades, is tough, and it requires more work, since low grades
have to be backed up with evidence of poor performance."
But Princeton and others may take some comfort from
Reed College, a selective liberal arts college in Oregon, where the average
grade-point average has remained a sobering 2.9 (on a 4.0 scale) for 19 years.
The college says it ranks third among all colleges
and universities in the proportion of students who go on for Ph.D.s, and has
produced more than 50 Fulbright Scholars and 31 Rhodes scholars.
Still, Colin S. Diver, Reed's president, says
graduate schools worried about their rankings are becoming less willing to
take students with lower grades because they make the graduate schools appear
less selective.
"If they admit someone with a 3.0 from Reed who
is in the upper half of the class, that counts against them, even if it is a
terrific student," Mr. Diver said. "I keep saying to my colleagues
here that we can hold ourselves out of the market for only so long."
This might set a legal precedent for all colleges and universities.
It also might root out instructors who give high grades in hopes of higher
student evaluations.
The student newspaper at Oklahoma State University has
won a three-month fight to get records in an electronic format regarding the
grades professors give students. School officials say the names of the
students will be blacked out. Sean Hill, a journalism student and editor
of The Daily O'Collegian, requested the information in November so he could
compare the average grades of different sections of the same classes. The
Oklahoman in Oklahoma City and the Tulsa World joined in the request.
School officials said at the time they would provide the records, but not in an
electronic format. OSU spokesman Gary Shutt now says the school can provide
records in the electronic format without jeopardizing student privacy and
confidentiality.
Editor and Publisher, February 10, 2005 --- http://www.editorandpublisher.com/eandp/news/article_display.jsp?vnu_content_id=1000798377
Jensen Comment:
Backed by studies such as the huge studies at Duke and other colleges mentioned
below, I've long
contended that student evaluations, when they heavily impact tenure granting and
performance evaluations, present a moral hazard and lead to grade inflation
across the campus. But I'm wary of using data such as that described above
to root out "easy" graders. Some instructors may be giving out
higher grades after forcing out weaker students before the course dropping
deadline and/or by scaring off weak students by sheer reputation for being
tough.
My solution, for the moral hazard of grade inflation caused by fear
of student course evaluations, entails having colleges create Teaching Quality
Control (TQC) Departments that act in strict confidentiality when counseling
instructors receiving low student evaluations. The student evaluations
themselves should be communicated only to the TQC Department and the
instructors. Because of moral hazard, student evaluations should not be
factored into tenure decisions or performance evaluations. In order to
evaluate teaching for tenure and performance evaluations, instructors should take turns sitting in on other instructors' courses, with the proviso that no instructor sits in on a course in his or her own department/school. In
other words, English instructors should sit in on accounting courses and vice
versa. This need not entail sitting in on all classes, and incentives must
be provided for faculty to take on the added workload.
I think that this, coupled with a TQC confidential counseling operation, will
make good teachers even better as well as reduce bad teaching and grade
inflation that has caused some schools like Princeton University to put caps on
the number of A grades. Corporations have Quality Control
departments. Why shouldn't colleges try improving quality in a similar
manner? The average course grade across many campuses is now a B+ to an
A-. Student evaluations are a major factor, if not the major factor, in
giving a C grade a bad name. I'm not saying that professors with high
student evaluations are easy graders. What I am saying is that weak or
unprepared teachers are giving easy grades to improve their own students'
evaluations.
Student evaluations of instructors are even more of a moral hazard when they
are made available to students and the public at large.
Some Reasons Harvard University Does Not Require Student
Evaluations
Student course evaluations are ubiquitous these days, whether they be at a
national site like ratemyprofessors.com or sponsored by individual institutions.
But Harvard University faculty members are split on whether evaluations should
be mandatory . . . Harvey C. Mansfield, a professor of government, reminded
colleagues at the Tuesday meeting that there are plenty of pitfalls to
evaluations. He said that evaluations promote “the rule of the less wise over
the more wise … on the assumption students know best.” Mansfield called
requiring evaluations an “intrusion on the sovereignty of the classroom,” and
said that evaluations “reward popular teachers at the expense of serious
teachers … popular teachers can be serious but many are not, and many teachers
are serious but not popular.” Mansfield added that he would like to hear more
discussion of evaluations, and to see their role diminished rather than
increased.
David Epstein, "One Size Doesn’t Fit All," Inside Higher Ed, May 4, 2006
---
http://www.insidehighered.com/news/2006/05/04/harvard
Is
grade deflation hitting the Ivy League?
"Deflating the
easy 'A'," by Teresa Méndez, Christian Science Monitor, May 4, 2004
--- http://www.csmonitor.com/2004/0504/p12s02-legn.html
Princeton students fear that a tough
stance on grades may harm campus culture - and limit their appeal to graduate
schools.
When Adam Kopald
exits Princeton University's gothic gates as a graduate in June 2005, he will
not have a GPA. Nor will he be assigned a class rank. He may not even know the
grades of his closest friends.
It's this lack of
competition, say Princeton students, that has made for a much less cutthroat
environment than one might expect from one of the country's most academically
elite universities.
Some students argue
that that's been a good thing for their school, where they say they strive to
do their own best work rather than to outdo one another - but it's a luxury
they now fear losing.
A new grading policy,
to go into effect next year, will reduce the number of A-pluses, A's, and
A-minuses for all courses to 35 percent, down from the current 46 percent. A's
given for independent work will be capped at 55 percent.
"There's
definitely going to be a competition that didn't exist before," says Mr.
Kopald, a history major. "Because any way you cut it, there are only 35
percent of people who are going to get A's."
At a time when
campuses are clamoring to appear more interested in the whole person,
students' mental health, and well-rounded development, some wonder if the
message being sent by instituting quotas isn't contradictory.
School
administrators, however, argue that grade inflation cannot be ignored.
Princeton first examined the problem six years ago.
"Our feeling
then was that we could just let it go, and over the next 25 years everyone
would be getting all A's," says Nancy Weiss Malkiel, dean of the college.
"But would that really be responsible in terms of the way we educated our
students?"
According to Dean
Malkiel, the goals of 35 percent and 55 percent will align the number of A's
granted with figures from the late 1980s and early '90s.
Other schools have
tried to address grade inflation, using measures like including contextual
information on transcripts, says Malkiel. And in 2002, Harvard limited
students graduating with honors to 60 percent. But as far as Malkiel knows,
this is the first widespread move to stem the trend of upward spiraling grades
that dates back to the 1970s.
What
caused grades to inflate
Experts blame grade
inflation on everything from fears of the draft during the Vietnam War to a
consumer mentality that expects higher marks in exchange for steeper tuition.
But some professors
say students today are increasingly bold about haggling for higher marks.
Often it's easier to give an A-minus instead of a B-plus than to argue.
Malkiel also says a broader culture of inflation may be a factor. Everything from high school GPAs to SAT scores has been on the rise.
But not all see the
phenomenon of rising grades as a bad thing. William Coplin, a professor at the
Maxwell School at Syracuse University, feels strongly there are a number of
reasons why grade inflation is not just acceptable - but good.
He says that students
learn in the classroom less than half of what they need to know for real life.
Distributing higher grades gives them room to explore other areas of interest
and to develop as people.
"Most students
do not see college as a place to develop skills. They see it as a place to get
a degree and have a high GPA," he says. "The truth is, skills are
more important than GPA." Professor Coplin worries that attempting to
stamp out grade inflation is simply "making the kids even crazier about
grades."
Annie Ostrager, a
politics major at Princeton, isn't convinced that grade inflation is a problem
either.
"I personally
have not perceived my grades to be inflated," says the junior. "I
work hard and get good grades. But I don't really feel like grades are flying
around that people aren't earning."
But most Princeton
students acknowledge there is a problem - although many doubt that quotas are
the best solution.
Matt Margolin,
president of the student government, estimates that 325 of the 350 e-mails he
has received from Princeton students express frustration with the new grading
policy.
Princeton isn't alone
in the battle against inflated grades. A study last year found that A's
accounted for 44 to 55 percent of grades in the Ivy League, MIT, Stanford, and
the University of Chicago.
Will
Princeton stand alone?
Yet by drawing public
attention to Princeton in particular, students worry it may come to be seen as
the most flagrant example.
"Putting it in
the public light like this has really damaged the image of a Princeton
transcript," says Robert Wong, a sophomore studying molecular biology.
Malkiel has assured
students this isn't true. In conversations with admissions officers at
graduate schools, employers, and fellowship coordinators across the country,
she says she has been told "that they would know going forward that a
Princeton A was a real A." They even suggested that tougher grading will
ultimately benefit Princeton students.
But not everyone is
convinced.
"I would like to
go to law school, so my eye has been on this proposal very carefully,"
says Mr. Margolin, a junior and a politics major. "My understanding is
that law school decides your fate based mostly on GPA and LSAT scores."
"A call for an end to grade
inflation," by Mary Beth Marklein, USA Today, May 2, 2002 --- http://www.usatoday.com/news/health/2002-02-05-grade-inflation.htm
At Harvard
University, a recent study found that nearly half of all grades awarded were A
or A-minus.
A tenured professor
is suing Temple University, saying he was fired because he wouldn't make his
courses easier or give students higher grades.
And now, a new report
prepared by the American Academy of Arts & Sciences says it's time to put
an end to grade inflation.
Concerns about grade
inflation, defined as an upward shift in the grade-point average without a
corresponding increase in student achievement, are not new. The report cites
evidence from national studies beginning as early as 1960. And while it is a
national phenomenon, authors Henry Rosovsky, a former Harvard dean, and
Matthew Hartley, a lecturer at the University of Pennsylvania, say the
phenomenon is "especially noticeable" in the Ivy League.
They blame the rise
of grade inflation in higher education on a complex web of factors, including:
An administrative
response to campus turmoil in the 1960s, and a trend, begun in the 1980s, in
which universities operate like businesses for student clients.
The advent of student
evaluations of professors and the increasing role of part-time instructors.
Watered-down course
content, along with changes in curricular and grading policies.
"At first glance
(grade inflation) may appear to be of little consequence," the authors
write. But it "creates internal confusion giving students and colleagues
less accurate information; it leads to individual injustices (and) it may also
engender confusion for graduate schools and employers." They say schools
should establish tangible and consistent standards, formulate alternative
grading systems and create a standard distribution curve in each class to act
as a yardstick.
Rosovsky and Hartley's report is
available at www.amacad.org/publications/occasional.htm.
May 4, 2004 reply from Hertel, Paula [phertel@trinity.edu]
I just now heard on
NPR an interview with one of the Princeton faculty who voted for the new
policy to limit A’s to 35%. She (a professor of economics) pointed out that
one of the biggest factors in establishing grade inflation is the perception
of faculty that course evaluations will be lower if grades are lower. We
should add that, even if the perception is wrong, its existence and influence do our students no favor in the long run.
It’s the nature of
the course evaluations that must change!
Paula
May 4, 2004 reply from Bob Jensen
Trinity University
professors may have too much integrity to allow student evaluations to inflate
grades. However, we do have marked grade inflation caused by something.
Research studies at other universities found that tough graders take a beating
on course evaluations:
Duke University Study --- http://www.aas.duke.edu/development/Miscellaneous/grades.html
Lenient
graders tend to support one theory for these findings: students with good
teachers learn more, earn higher grades and, appreciating a job well done,
rate the course more highly. This is good news for pedagogy, if true. But
tough graders tend to side with two other interpretations: in what has
become known as the grade attribution theory, students attribute success
to themselves and failure to others, blaming the instructor for low marks.
In the so-called leniency theory, students simply reward teachers who
reward them (not because they're good teachers). In both cases, students
deliver less favorable evaluations to hard graders.
University
of
Washington
Study ---
http://www.washington.edu/newsroom/news/k120497.html
"Our
research has confirmed what critics of student ratings have long
suspected, that grading leniency affects ratings. All other things being
equal, a professor can get higher ratings by giving higher grades,"
adds Gillmore, director of the UW's office of educational assessment.
The two
researchers' criticisms, which are counter to much prevailing opinion in
the educational community, stem from a new study of evaluations from 600
classes representing the full spectrum of undergraduate courses offered at
the UW. Their study is described in a paper being published in the
December issue of the Journal of Educational Psychology and in two papers
published in a special section edited by Greenwald in the November issue
of the American Psychologist.
Rutgers University --- http://complit.rutgers.edu/palinurus/
An article that
drew a lot of responses in the media. Among other things, the author
claims that "Some departments shower students with A's to fill poorly
attended courses that might otherwise be canceled. Individual professors
inflate grades after consumer-conscious administrators hound them into it.
Professors at every level inflate to escape negative evaluations by
students, whose opinions now figure in tenure and promotion
decisions."
Archibold, Randal
C. "Just Because the Grades Are Up, Are Princeton Students
Smarter?" The New York Times (Feb 18, 1998), Sec: A P. 1.
A long article
following a report on Princeton’s grade inflation. Includes a
presentation of possible reasons for the phenomenon.
Goldin, Davidson.
"In A Change of Policy, and Heart, Colleges Join Fight Against
Inflated Grades." The New York Times (Jul 4, 1995), Sec: 1 P.
8.
The article
presents the tendency of elite institutions to follow Stanford and
Dartmouth’s lead in fighting grade inflation. Brown stands out in
refusing the trend by making its transcripts reflect achievements only.
The rationale: "'When you send in your resume, do you put down all the
jobs you applied for that you didn't get?' said Sheila Blumstein, Brown's
dean. 'A Brown transcript is a record of a student's academic
accomplishments.'"
University of Montana --- http://www.rtis.com/reg/bcs/pol/touchstone/november97/crumbley.htm
The mid-term
removal of a chemistry instructor at the University of Montana in 1995
because he was "too tough" illustrates the widespread grade
inflation in the United States. Grade inflation will not diminish until
the root cause of grade inflation and course work deflation is eliminated:
widespread use of anonymous student evaluations of teaching (SET). If an
instructor calls a student stupid by giving low marks, it is unlikely the
student will evaluate the instructor highly on an anonymous questionnaire.
As more and
more research questions the validity of summative SET as an indicator of
instructor effectiveness, there has ironically been a greater use of
summative SET. A summative SET has at least one question which acts as a
surrogate for teaching effectiveness. In 1984, two-thirds of liberal arts
colleges were using SET for personnel decisions, and 86% in 1993. Most
business schools now use SET for decision making, and 95% of the deans at
220 accredited undergraduate schools "always use them as a source of
information," but only 67% of the department heads relied upon them.
Use of SET in higher education appears frozen in time. Even though they
measure the wrong thing, they linger like snow in a shaded corner of the
back yard, refusing to thaw.
Mixed opinions voiced in The
Chronicle of Higher Education (not usually backed by a formal study) ---
http://chronicle.com/colloquy/98/evaluation/re.htm
CONCLUSIONS
Causes of grade inflation are
complex and very situational in terms of discipline, instructor integrity,
pedagogy, promotion and tenure decision processes, course demand by students,
pressures to retain tuition-paying students, etc. I suspect that if I dig
harder, there will be a few studies attempting to contradict the findings above.
One type of contradictory study does not impress me on this
issue of grade inflation. That is a study of the instructors rated
highest by students, say the top ten percent of the instructors in the college.
Just because some, or even most, of those highly-rated instructors are also hard
graders does not get at the root of the problem. The
problem lies with those instructors receiving average or below-average evaluations who
see more lenient grading as a way to raise student evaluations.
One thing that is absolutely clear in my
mind is that teaching evaluations are the major cause of system-wide grade
inflation. My opinion is in part due to the explosion in grade inflation
that accompanied the start of anonymous course evaluations being reported to
administrators and P&T committees. In the 1960s and 1970s we had
course evaluations in most instances, but these were always considered to be
private information owned only by the course instructors who were generally
assumed to be professionally responsible enough to seriously consider the
evaluation outcomes in private.
There are no simple solutions to grade inflation. The
Princeton
35% cap on A grades is not a solution if some members of the faculty just
refuse to abide by the cap (and faculty are known to be proudly independent).
Grades are highly motivational and, as such, motivate for different purposes in
different situations. Student evaluations of faculty serve different
purposes and, as such, motivate faculty for different purposes in different
situations.
I have no solution to recommend at
the moment for grade inflation. But I would like to recommend that my own
university, Trinity University, consider adopting an A+ grade with a cap of 10%
(not rounded) in each class. For example, a class with 19 students would
be allowed to have one A+ student; a class with 20 students could have two
A+ students. The A+ would not be factored into the overall gpa, but it
would be recorded on a student's transcript. This would do absolutely
nothing to relieve grade inflation. But it would help to alleviate the
problem of having exceptional students in a class lose motivation to strive
harder for the top grade. One of the problems noted in the Duke,
Washington, and Rutgers studies is that exceptional students don't strive as
hard after they are assured of getting the highest grade possible in the class.
Why not make them strive a little bit harder?
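The truncated (not rounded) 10% cap described above amounts to simple integer arithmetic. A minimal sketch in Python, assuming a flat per-class cap (the function name and integer-percent interface are mine, purely for illustration):

```python
def a_plus_cap(class_size: int, cap_percent: int = 10) -> int:
    """Maximum number of A+ grades under a truncated (not rounded) cap.

    Integer arithmetic makes the truncation explicit and avoids
    floating-point rounding surprises.
    """
    return (class_size * cap_percent) // 100

# A class of 19 students is allowed one A+; a class of 20 is allowed two.
print(a_plus_cap(19))  # 1
print(a_plus_cap(20))  # 2
```

Note that a class of 9 or fewer would be allowed no A+ at all under strict truncation, which is one practical wrinkle such a policy would have to address.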
It was just plain tougher in the
good old days. Some sobering percentages about grade inflation --- http://www.cybercollege.com/plume3.htm
In 1966 at Harvard,
22% of all grades were A's. In 2003, that figure had grown to 46%. In 1968 at
UCLA, 22% of all grades were A's. By 2002, that figure was 47%.
The so-called
Ivy League schools, MIT, Stanford, and the University of Chicago, averaged 50%
A's (in recent years).
The most immediate
effect of giving almost 50% A's is that exceptional students see little reason
to try to excel. They know they can "coast their way" to an A
without really being challenged.
Awarding
students A's for C+ work robs the best and the brightest.
Prof. Roger Arnold --- http://www.cybercollege.com/plume3.htm
May 4, 2004 reply from David R. Fordham
[fordhadr@JMU.EDU]
RIGHT on!
Back when I was
program director, it was empirically demonstrable that grade distribution (as
well as time of day, number of empty seats in the classroom, and
male-versus-female professor and student pairings -- each individually, let
alone collectively) could overpower individual identity when it came to
student evaluations of faculty.
I never, ever,
referred to them as Student Evaluations of Faculty. I always referred to them
as “Student Perceptions”. I used them as ONE (and a minor one at that) of
many factors in evaluating faculty. One of the more valid, in my mind,
measures of faculty performance is feedback from 5-year+ alums. Although
delayed, such feedback says much more about the quality of “education”
than anything which could be generated contemporaneously. This is the major
reason for my contempt for “assessment programs” of the form in which they
are currently being promoted by the Asinine Administrators Compelling Sales of
Bullexcrement… (I may not have the full name of the organization completely
correct, since they recently changed their official moniker, but I’m hoping
everyone will forgive my mistake and go with the acronym.)
As always,
Argumentative,
Assertive, Contrary, Scathing, and Bullheaded,
David R. Fordham
PBGH Faculty Fellow
James Madison University
May 4, 2004 reply from Linda Kidwell
from the University of Niagara (visiting this year Down Under)
I stumbled into a different approach here in
Australia during my visiting year. There are percentage parameters for grade
distribution at some universities. For example only a small percentage can be
awarded HD (A), and there's a maximum percentage that can receive Fs. There's
essentially a bell curve expectation. I had a bit of trouble first term here
because my grade distribution was too high for the faculty guidelines.
I have mixed feelings about it. I consider it a
violation of academic freedom in part, though perhaps suggested guidelines are
good. And if I have a particularly good class, I don't want to artificially
lower their grades. On the other hand, it does take some of the grade pressure
off -- I never find myself tempted to curve a tough exam, and I don't
automatically round upward for those borderline grades. So it's a mixed bag!
What I'd like to see is a bit more concern over the
granting of latin honors in the US. When I was a student at Smith, only the
top 2 students earned Summa Cum Laude, the next 25 or so got Magna, and next
50 got Cum Laude (I'm guessing at the latter 2, but you get the idea). So you
really had to be among the best to earn it. At Niagara, my home institution,
it is based on GPA. In business we have tougher grading standards (tougher
courses too?) than other areas. As a result, a small percentage of our
business students earn latins, but a staggering 70% of the education majors
get them. Are all the brilliant students really in the school of education?
Every year at commencement the business and arts & science faculty roll
their eyes as those honors are announced. I think it cheapens the whole honor,
and it is unfair to students in the areas that don't inflate grades. It's also
unfair to those education students who really are top-flight.
Linda Kidwell
May 5, 2004 reply from Robert Holmes Glendale College [rcholmes@GLENDALE.CC.CA.US]
Some time ago I mentioned to the list that I agreed
to meet with some of the students in my on-line course for extra instruction.
At least one of you said that since not everyone could come to my office, I
was being unfair to the class by allowing the students who could come to my
office to have added help. I thought at the time how could I be unfair by
helping students? My school does not have a maximum or minimum limit on the
number of A's or B's we assign to students. We are expected to assign grades
based on mastery of the subject, not by rank in the class. When grades are
assigned by rank in the class, then giving one student the benefit of my time
and denying it to others is unfair. Those who can come to my office are better
able to beat the students who cannot come. I do not like the idea of the
competitive model. I do not want to frustrate students who are eager for
learning because it is not fair to the rest of the class. I would much rather
see students helping each other to the benefit of both instead of withholding
knowledge in order to beat their classmates. It is probably easier to assign
grades when you just add up the points and the first X% get A's and so on, but
I would hope most of us know what we want the students to get from our
classes, and those who get it should be rewarded and those who don't get it
should not be rewarded, no matter how many of each are in a particular class.
As the college bound population grows, the "top" schools in the
country should be having more high quality applicants to choose from, and they
should find that more students are mastering the subject matter, and thus
receiving higher grades on average.
May 5, 2004 reply from Bob Jensen
Hi Robert,
As usual, you raised an interesting point. I think most of us are
accustomed to motivating our top students to reach for the stars. We
want to graduate students who can get into the top graduate schools, leading
CPA firms, top corporations, etc. We want to bring honors to our
university by watching students get outside honors such as Rhodes Scholarships
and medals for CPA examination scores.
One of the best ways to motivate top students is grade competition. Top
students generally strive for the top grade in a class and the highest gpa in
the college. But they may not strive any harder than it takes to get the top
grade in a class, at least that's what the studies from Duke, Washington, and
Princeton are telling us.
Now the Australian system that Linda Kidwell describes with a bell-curve
grade distribution and a limit of say 2% for that Highest Honors designation
is aimed at motivating the best students in the class to obtain the highest
honor possible on their transcripts. These top students work night and
day to earn their star designations.
Your grading system is not designed to motivate top students to be highest
honor students. There is no grade incentive for an exceptional student
in your class to work any harder than it takes to earn, with half the
effort, the same A grade that an average student earns only by putting in
extra hours with you.
But your system may have turned some student's life around, a student who
never thought it was possible to earn an A grade in an accounting class.
You have thus met what is probably your main goal as an educator. And
you have not achieved grade inflation by simply dumbing down your course.
I guess what we conclude from your system is that there are different
grading scales for different purposes. Perhaps there is more student
objection to grade inflation in the Ivy League schools because these students
are reaching for the highest stars required to gain entry into elite graduate
programs or some other elitist future where only the highest stars have an
entry opportunity.
Your A students, on the other hand, may have a longer-run shot at the top
because you helped coax them out of the starting gate.
I guess I can't find fault with this except that I
hope you kick ass when you encounter an exceptional student.
May 5, 2004 reply from Chuck Pier [texcap@HOTMAIL.COM]
As a follow-up to my commentary on the number vs.
letter grading system, when I first got to Appalachian State I was thrilled
that we used the + & - system because I felt I could provide
differentiation for the students and not lump the students with a score of 80
with the students that scored an 89. However, what I have realized as I
approach the end of my second year here is that the more divisions we have in
the grading scale, the more boundary lines we create. The more boundary lines
we create, the more students are disappointed about missing the next level and
the more they will ask or pester you to help them. After all, "we are
only talking about a point or two!"
This time of the year is always the most stressful
for me. Does it get any better after we've been doing it for a while? (One of
David's rhetorical questions.) ;>)
Chuck
Charles A. Pier
Assistant Professor Department of Accounting
Walker College of Business Appalachian State University
Boone, NC 28608
email: pierca@appstate.edu
May 7, 2004 reply from Randy Elder [rjelder@SYR.EDU]
I've followed the
thread on grade inflation with much interest. It is a topic that I have great
interest in, and here are some observations.
1. Relation between
grades and evaluations - I think that the faculty perception that grades
influence evaluations is a much greater problem for grade inflation than the
actual relation, which I don't believe is that strong. An even greater problem
is that bad teachers use grading difficulty as an excuse for their
evaluations.
2. Student
evaluations - I also believe that we place way too much reliance on student
evaluations. Evaluations aren't going away, but there is minimal effort to
evaluate the actual effectiveness of teachers.
3. Grading policies -
Some of the discussion has focused on grading on the "curve". I find
that professors grade either using some sort of curve or using fixed
evaluation criteria. I much prefer the latter, as it does not place students
into competition with each other. More importantly, it allows students to
better know where they stand in the course, and attribute their performance to
their own effort. My courses always have a fixed number of points, and I
inform students of the minimum cutoffs for each grade level.
4. Sample exams - In
the Syracuse University Whitman School of Management, it is policy to make
some sample exam material available. The reason is to provide equal access, on
the assumption that there are old exams floating around in frat houses. The
theory is to give students an idea of the types of questions to be asked. I
also encourage students to use it as a diagnostic tool. Unfortunately, I
believe most students misuse the sample exams and focus on the answers, rather
than the knowledge to be tested.
5. Grading
information - At SU, we have historically not made much grading information
available, unlike my experience at public universities. We are moving toward
much greater availability of this information. I hope that this will eliminate
some posturing about grades (prof who claims to be tough but isn't; belief
that prof X gets good grades only because he grades easy, etc.) We also hope
to provide some grading guidelines that will serve to reduce some grade
inflation.
Randy Elder
Associate Professor and Director
Joseph I. Lubin School of Accounting
Martin J. Whitman School of Management
Syracuse University
Syracuse, NY 13244-2130
Email: rjelder@som.syr.edu
Phone: (315) 443-3359
Fax: (315) 443-5457
After I asked Randy to elaborate on his Point 5 above regarding grading
information disclosure, he replied as follows on May 10, 2004:
Bob,
Thanks for the compliment. I wasn't sure
that my remarks were that thoughtful as I was reading AECM messages on a LIFO
basis and discovered lots more good input on the subject after my post.
We do not make grade information available
to students. However, I believe it may be helpful to do so as it eliminates
misinformation that is passed around informally and on the web (you might want
to check out the site www.ratemysuclass.com).
This web site is spreading to other universities.
We make summarized grading information
available to department chairs to share with faculty. We have tried to focus
on courses by omitting faculty names. The accounting department has
established grading guidelines by course level, and I expect the School of
Management to do the same in the near future. I emphasize that these are
guidelines, and faculty can deviate from them.
I have been a strong advocate of having such
policies, and was influenced by my time as a doctoral student at Michigan
State and a year visiting at Indiana. As a doctoral student, I wanted to make sure
that my grading conformed to grading by full-time faculty. I was directed to a
file that had a complete grading history for every course. At Indiana, the
department shared a 10-year grading history for every course. During my visit
at Indiana, the AIS department adopted grading guidelines that we modeled ours
after.
Randy
May 11, 2004 reply from Bob Jensen
Hi Randy,
I follow rate-my-class ( http://www.ratemysuclass.com/browse2.cfm?id=111
) only as a curiosity.
It is an illustration of the evils of self-selection and bias. Some
professors actually encourage selected students to send in evaluations.
Naturally these tend to be glowing evaluations.
Most courses reviewed suffer from self-selection bias of disgruntled
students. Most reviews tend to be negative. The number of students who send in
reviews is minuscule relative to the number who take the courses. I mean we're
talking about epsilon here!
Disgruntled students also seem to have a competition regarding who can
write the funniest disparaging review.
Fortunately, the site seems to be ignored where it counts.
Bob Jensen
May 12, 2004 reply from David R. Fordham [fordhadr@JMU.EDU]
Another one is:
www.ratemyprofessor.com
I use it as an example of how gullible people are...
taking Internet sites as Gospel without considering where the data comes
from...
David R. Fordham
PBGH Faculty Fellow
James Madison University
May 5, 2004 reply from Jagdish Gangolly [JGangolly@UAMAIL.ALBANY.EDU]
Bob,
I think it is important to provide incentives to be
the best. It is also important to provide incentives to be NOT at the bottom.
In the old days, at Cambridge University, at least in
the Mathematical Tripos, the students were graded into four classes: senior
wrangler (only one student could be this), wranglers, senior optimes, and
junior optimes. During the commencement, the student at the bottom of the
totem pole would be required to carry the "wooden spoon" (for a
picture of it click on http://www.damtp.cam.ac.uk/user/sjc1/selwyn/mathematics/spoon.html
), to signify that (s)he was good mainly for stirring the oats.
While draconian, the wooden spoon provided sufficient
incentives to the students not to be the one to carry it. The tragedy is that
nowadays many students might carry it with pride (to be called not-a-geek or
nerd).
Jagdish
May 5, 2004 reply from Bob Jensen
Hi Jagdish,
I loved the link at http://www.damtp.cam.ac.uk/user/sjc1/selwyn/mathematics/spoon.html
But I have one question:
Wooden spoon too quick
Make student much to sick
Wooden spoon too late
Make student out of date
Wooden spoon on time
Make student want to climb
Main question when I teach a goon
Where is it best to place that spoon?
Thanks,
Bob Jensen
May 5 reply from Jagdish Pathak
I find grades by
themselves faulty in the scenario of those schools where only the very best
are chosen to be privileged students, viz. the Ivy League ones. In my view it
is wrong to have more than one grade in such schools. All of us are aware that
these schools admit only the top rung of SAT scorers, so what value has the
school added in four years if these students come out with less than an 'A' grade?
I believe there is a way to differentiate all these potential 'A's, and that is
by differentiating the 'A' grade itself. The very best, or the top 5-10%, would
automatically acquire 'AAA', the large middle group would acquire 'AA', and
the remaining minority would get 'A'. There can be a theoretical provision for a 'B'
or 'F', which would be a 'B' or 'F' like anywhere else, and the student may have
one additional chance to make it into the higher AAA, AA, or A grade.
How does it sound? Please forgive me if I have sounded a bit judgmental.
Jagdish Pathak, PhD
Assistant Professor of Accounting Systems
Accounting & Audit Area
Odette School of Business
University of Windsor
401 Sunset
Windsor, N9B 3P4, ON
Canada
May 5, 2004 reply from Bob Jensen
Hi
Jagdish,
I
think a “rose by any other name is a rose.”
I’m
not certain whether AAA/AA/A/B/C/D/F is much different from 6/5/4/3/2/1/0 in
the eyes of a student in a class. An
ordinal ranking with seven categories is an ordinal ranking with seven
categories by any other name.
Other
ordinal rankings by any other name may be somewhat different.
Whether ranks have two scales (P/F), three scales (H/M/L), five scales
(A/B/C/D/F) or a ranking of N students (1/2/3/…/N) changes the nature of the
competition. The more ranking
categories, the more intense the competition becomes to get the highest
possible grade. For example, in
the U.S. Military Academy, the top ranking graduates down to the bottom
ranking graduates are all determined, and this makes for some intense
competition to be the top graduate (although the lower ten prospective
graduates may decide to compete in a race to the bottom just for the
distinction of being last after earning a decent rank becomes hopeless).
Another
problem is one of aggregation across courses. For
example, an ordinal scale of A/B/C/D/F becomes a cardinal scale carried out to
two decimal points when we transform a set of grades into something like a
gpa = 3.47. We have thereby
created a cardinal way to rank graduates on a continuum when the inputs to the
cardinal outcomes are only ordinal A/B/C/D/F grades for every course.
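The ordinal-to-cardinal aggregation can be made concrete with a short sketch, assuming the conventional U.S. 4-point mapping (the helper name is mine):

```python
# Conventional U.S. 4-point mapping from ordinal letter grades to points.
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def gpa(letter_grades):
    """Collapse a set of ordinal letter grades into a two-decimal
    cardinal GPA -- the transformation described above."""
    points = [GRADE_POINTS[g] for g in letter_grades]
    return round(sum(points) / len(points), 2)

# Five ordinal grades become a single point on a cardinal continuum.
print(gpa(["A", "A", "B", "A", "B"]))  # 3.6
```

The sketch makes the underlying objection visible: the averaging step treats the gap between A and B as numerically identical to the gap between D and F, an assumption the ordinal grades themselves never made.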
Students
are most interested in how rankings affect them in later life. For
example, suppose Big Four accounting firms will only interview students with a
gpa of 3.30 or above. In that
case, weaker students will advocate more grade inflation so they can make the
cut. Top students will advocate
grade deflation so that the pool of students having a gpa higher than 3.30 is
smaller. For example, suppose
grade deflation leaves a pool of 10 qualified graduates whereas grade
inflation leaves a pool of 40 qualified graduates.
If only nine winners are going to be chosen from the pool, then top
students have better odds with grade deflation.
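Under the simplifying assumption that the nine winners are drawn uniformly from the qualified pool, the arithmetic behind "better odds with grade deflation" looks like this (a sketch, not a model of actual recruiting):

```python
def selection_odds(winners: int, pool_size: int) -> float:
    """Chance a given qualified graduate is picked, assuming the
    winners are drawn uniformly from the qualified pool."""
    return winners / pool_size

# Grade deflation: 9 winners drawn from a pool of 10 qualified graduates.
print(selection_odds(9, 10))  # 0.9
# Grade inflation: the same 9 winners drawn from a pool of 40.
print(selection_odds(9, 40))  # 0.225
```

A top student's chance of landing one of the nine slots falls from 90% to about 22% when inflation quadruples the pool, which is exactly why the strongest students lobby against it.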
One
problem we are having at the K-12 level, is that students are aspiring for
less. I will forward Steve
Curry’s opinion on this.
Bob Jensen
May 5, 2004 reply from Steve Curry
The five letter grades were supposed to be a scale
with C meaning average. A and B were above average, D and F were below
average. The youth and college kids I work with at church are not interested
in this scale. (Nor the related 100-point scale, nor the 4.0 GPA scale.) The
parents want the A, the kids themselves are much more in the pass/fail
mindset. It’s like the joke: What do you call someone who graduated at the
bottom of the class in medical school? Doctor. Whether this is an overall
societal trend, I cannot say. It may be useful to find out. If so, our
evaluations of them and their evaluations of us need to change.
When the mandatory faculty evaluations were
introduced back in 1987, I heard one professor argue that there should only be
one question: “Did you learn anything?” From what I’ve seen in the teens
I know, this simple evaluation is what they want. When the pass/fail kids
become the pass/fail parents and teachers, the various scaled systems may not
survive. If change is to occur, it will be long and painful.
Another question arises: How important is evaluation
in the first place? Certainly education that is preparing students for life
needs to evaluate whether the student has learned what is necessary but what
about the part of education that is learning for learning’s sake? Someone
who wants to become a banker certainly needs to be taught amortization and
there needs to be an evaluation to see if they understand the concept and its
application before they are certified. But is it really necessary to evaluate
a person who takes a history course simply because they love the story?
Evaluating the former is easy. Give them some numbers and see if they get it
right (pass/fail). The latter is more difficult. Which details does the
instructor think are important? This subjectivity lends itself more to a
scaled evaluation but the basic question is if evaluation is even necessary at
all. Back to the simple question “Did you learn anything?”
All this may help explain the rise of technical
training in our society where you either get the certificate or you don’t.
Maybe Career Services may have some insight as to whether campus recruiters
even look at the transcript. In my first job out of college, the phone company
never requested a transcript, they just asked if I had a degree. Have our
recent graduates encountered the same?
That we even have a concern over grade deflation (a
few years ago we were discussing grade inflation and the Lake Wobegon Effect)
draws into question the credibility of our current evaluation system in the
first place. If average truly is average then the average grade should have
been, should be, and should always be a C. If it isn’t, this suggests the
evaluation system is not accurate or impartial. It also implies it is not
fair.
Stephen Curry Stephen.Curry@Trinity.edu
Information Technology Services Phone: 210-999-7445
Trinity University http://www.trinity.edu\scurry
One Trinity Place
San Antonio, Texas 78212-7200
May 5, 2004 response from akonstam@trinity.edu
I have never understood why faculty would be interested in
having lower grades in their classes. Grade inflation might be caused by:
1. Better students. Should not the better students
at Harvard get better grades? When we change our average student SAT from
1000 to 1250, should they not get better grades?
2. Maybe teaching and teaching tools have become
more effective.
3. Are all courses equally hard, and should they be?
Do we really think art courses and calculus courses need to be equally
difficult?
With deference to Bob Jensen's studies, there are too
many variables in producing better grades to pin down the cause effectively.
Aaron Konstam
Computer Science Trinity University
One Trinity Place. San Antonio, TX 78212-7200
May 5, 2004 reply from Bob Jensen
Aaron wrote the following:
****************
1. Better students. Should not
the better students at Harvard get better grades? When we change our average
student SAT from 1000 to 1250, should they not get better grades?
**************
Hi Aaron,
I think your argument overlooks the fact that the
people raising the most hell over grade inflation are the best students
currently enrolled in our universities, especially students in the Ivy League
universities. If 50% of the
students get A grades at Harvard, the Harvard grade average becomes irrelevant
when Harvard graduates are attempting to get into law, medical, and other
graduate schools at Harvard and the other Ivy League graduate schools.
Virtually all the applicants have A grades.
Where do admissions gatekeepers go from there in an effort to find the
best of the best?
The uproar from top students at
Princeton
was a major factor leading to
Princeton
's decision to put a cap on the proportion of A grades.
Some years back the Stanford Graduate School of
Business succumbed to pressures from top MBA students to cap the highest
grades in courses to 15% of each class. This became known as the Van
Horne Cap when I was visiting at Stanford (Jim Van Horne was then the
Associate Dean). The reason the
top students were upset by grade inflation was that they were not being
recognized as being the best of the best in order to land $150,000 starting
salaries in the top consulting firms of the world.
Those consulting firms wanted the top 10% of the graduates tagged
"prime-grade" for market by Stanford professors.
(Recruiters also complained that all letters of recommendation, even
those for weaker students, were too glowing to be of much use.
This is partly due to fear of lawsuits, but it's also a cop out.)
*******************
And now, a new report prepared by the
American
Academy
of Arts & Sciences says it's time to put an end to grade inflation.
"Deflating the easy 'A'," by Teresa Méndez,
Christian Science Monitor,
May 4, 2004
--- http://www.csmonitor.com/2004/0504/p12s02-legn.html
*******************
May 6, 2004 message from Paul Fisher [PFisher@ROGUECC.EDU]
The BBC did a small piece on the four-minute mile
this morning. It is interesting that 30-40 years ago that barrier was thought
to be impossible to break, yet now runners are not considered
"world-class" unless they can do so regularly. Does that mean our
tracks are shorter? Stopwatches slower?
We should be improving our instructing ability and
our students' grades should be reflecting that. I know that my courses are
taught much better today than twenty years ago, and I would be surprised if
any instructor would say that their teaching skills have degraded over the
years.
That does not mean I don't see the internal problems
with SAT and other measurements that may inhibit student learning, yet
maintain instructor status.
Paul
May 6, 2004 reply from Bob Jensen
Hi Paul,
You said:
*****************************
"We should be improving our instructing ability and our students'
grades should be reflecting that. I know that my courses are taught much
better today than twenty years ago, and I would be surprised if any
instructor would say that their teaching skills have degraded over the
years."
****************************
Near the bottom of this message you will read a less optimistic quote from
Ohio State University:
***************************
The massive number of undergraduates who are effectively illiterate (and
innumerate) leads to a general dumbing down of the curriculum, certainly the
humanities curriculum.
***************************
It is absolutely clear that we are not "improving our instructing
ability" in K-12 education where our TV-generation graduates are on a
race for the bottom and are demonstrating an immense lack of motivation in
public schools. They are winning a speed test in terms of hours spent in class
(maybe 4-5 hours) per day vis-à-vis my school days when we spent nearly eight
hours per day (8:00 a.m.-12:00 noon and 1:00-4:30 p.m.) in class minus two recess
breaks.
NB:
Especially note the last paragraph at the bottom of this message which
compares
U.S.
versus Japanese school children. The
last line reads "A little Japanese respect for hard work might work
wonders for this generation of American slackers who refuse to recognize their
own ignorance with anything other than praise."
It is also doubtful for our college graduates when employers tell us how
badly communication skills have declined in our graduates, especially grammar
and creative writing skills of the TV-generation. I think the media has
greatly expanded student superficial knowledge about a lot of things, but so
much of it seems so shallow. Ask your college's older writing composition
instructors whether writing skills have improved over the years. Ask the
instructors in the basic math/stat course whether math skills have improved.
I think that more of our graduates might be able to run the four-minute
mile, and their term papers may be equally fast-paced Google pastes that set
speed records but not quality records.
How well do you think our college graduates would do on this supposed 1895
test for eighth graders --- http://skyways.lib.ks.us/kansas/genweb/ottawa/exam.html
If you get a chance, compare the reading book currently used in the fifth
grade of your school district with the turn-of-the-century McGuffey Reader
---- http://omega.cohums.ohio-state.edu/mailing_lists/CLA-L/1999/12/0092.php
The recent anecdotes about the inability of
undergraduates to read what grade school students used to read before WW
II should hardly come as a surprise. The new 1998 NAEP writing
assessments, now available at the National Center for Educational
Statistics, show in correlation with the reading assessments that the
majority of US students lack the skills for reading any advanced
literature.
In his press release, Gary W. Phillips, the
Acting Commissioner for the NCES, stated that the average or typical US
student is not a proficient writer (where "proficient" is a
descriptive skill category of the NAEP) and has only partial mastery of
the knowledge and skills required for solid academic performance in
writing. This is true, he noted, at the national level for all three
grades (4th, 8th and 12th). Only 25% had reached the proficient
achievement level, while a mere 1% in each grade had reached the advanced
achievement level. I note that the skills required for basic, proficient
and advanced are very generous. By the English standards of a century ago,
"advanced" would probably not even qualify for
"basic."
Here is a summary of the percentage of students
at or above each achievement level by gender:

Gender   Advanced   Proficient   Basic
Male        0          14          70
Female      1          29          86
The discrepancy between male and female
proficiency should ring alarm bells throughout the educational world. The
gap here nearly guarantees poor male performance at the university. As a
gross description, the data show that 23-38 percent of US students fall
below grade level in writing. If one compares the writing assessments with
the reading assessments, a fairly close correspondence between the two is
evident. Here is a summary of the percentage of students at or above each
achievement level in reading by year of assessment:

Year   Advanced   Proficient   Basic
1998      6          40          77
1994      4          36          75
1992      4          40          80
What this tells us is what everyone who teaches
writing knows quite well: writing is a form of book talk. Failure in
reading assures failure in writing.
It is, as a consequence, hopeless to tackle the
writing problem without first solving the reading problem. Indeed, I'm
quite confident that a massive improvement in reading skills would, by
itself, produce a significant improvement in writing skills. The NAEP
assessments suggest modest improvement in reading at the fourth grade
level (though skewed by the failure of some states to include the results
from students with learning disabilities), but they are far too small for
the enormous amount of money that has been spent to improve the skill.
Since private schools consistently outperform public schools by a large
margin at all grade levels in both reading and writing assessments, there
are clear advantages in relative freedom from the educational bureaucracy
and greater control over discipline and content. It is very unlikely, in
my opinion, that the public schools will ever work very well unless the
socio-economic disparity between the poor and the middle class (shrinking
though it is) can be eliminated or at least reduced. The NAEP results show
another important correspondence, that between parental education and
writing skill. Parents with a college degree impart more social
capital--including discipline and higher expectations--to their children
than parents with only a high school degree or no degree.
The massive number of undergraduates who are
effectively illiterate (and innumerate) leads to a general dumbing down of
the curriculum, certainly the humanities curriculum. Heroic efforts must
be made simply to convey the semantic meaning of a passage children once
read in McGuffey's Reader.
A healthy respect for their own deficiencies coupled with the will to
learn and a relentless courage to fight through to understanding would
help these weak students enormously. Unfortunately, a very large
proportion are simply disengaged from any kind of serious, disciplined and
steady application to studies as a study by UCLA's Higher Education
Research Institute shows (_The American Freshman: National Norms for Fall
1995_, ed. Sax et al. (Los Angeles: HERS, 1995)). More and more students
entering college have spent less time at homework than ever before, talked
less to teachers outside class, participated less actively in clubs and
visited a teacher's home less frequently. They want everything presented
to them in an easily graspable, attractive package--like a TV sitcom. Many
claim to be bored in class and are hostile to long or complex reading
assignments (whole classes indeed will revolt on occasion), but expect
good grades for mediocre work. The alienated and disengaged are often
proud of their ignorance. A student who claims to have read all of Othello
I.i, which is after all a very modest assignment, without understanding a
word of it has not availed himself of a good annotated edition, of
dictionaries and of reference works. He also lacks a decent sense of
shame. More significantly, he hasn't displayed the will to keep working at
the scene until some understanding breaks clear.
In all my years of teaching Shakespeare at the
undergraduate and graduate levels, as in my years teaching him in high
school, I never encountered such a completely blank mind. Certainly not in
Japan, where I'm currently teaching a seminar in Shakespeare with students
who labor unremittingly to follow the syntax and meaning. A little
Japanese respect for hard work might work wonders for this generation of
American slackers who refuse to recognize their own ignorance with
anything other than praise.
As
far as the laws of mathematics refer to reality, they are not certain; and as
far as they are certain, they do not refer to reality.
Albert Einstein.
I suspect this quote could easily be modified to apply to academic accounting
research.
How could a school
district be unaware of such an important law? The law itself is probably a
poor law that will ultimately turn the Algebra course into a color-the-equation
course for students not bound for college. The fact of the matter is that
the algebra coloring books could just not be printed by the time the law went
into effect.
May 2, 2004 message
from Dr. Mark H. Shapiro [mshapiro@irascibleprofessor.com]
The
Los Angeles Times
recently reported that some 200 school districts in California had been
granted waivers from the new graduation requirement that compels every high
school student in the "golden state" to pass Algebra 1 before
receiving his or her diploma. The school districts that were granted waivers
complained that they were unaware of the new law, and that it would be unfair
to penalize their students who were about to graduate because of the failings
of these districts.
For
students not interested in going on to college, wouldn’t it be better to
substitute the Algebra course for a course combining Excel financial functions
with the basic mathematics of finance so that students would understand how
interest rates are calculated on loans and the basics of how they might be
cheated by lenders, investment advisors (read that mutual fund advisors), and
employers? For those students, the best thing they could learn in my opinion is
at http://faculty.trinity.edu/rjensen/FraudDealers.htm
The course could also include some basic income tax fundamentals like interest
and property tax deductions and the calculations of after-tax costs of home
ownership and the senseless cost of purchasing vehicles you cannot afford.
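The interest arithmetic such a course would teach fits in a few lines. Here is a minimal sketch in Python; the loan amount, rate, and term are made-up illustration values, and the formula is the standard annuity payment behind Excel's PMT function.

```python
# Fixed monthly payment on a fully amortizing loan -- the same arithmetic
# Excel's PMT function performs. Loan terms below are illustrative only.

def monthly_payment(principal, annual_rate, months):
    """Fixed monthly payment on a fully amortizing loan."""
    r = annual_rate / 12          # periodic (monthly) interest rate
    return principal * r / (1 - (1 + r) ** -months)

# Example: a $20,000 car loan at 6% APR for 5 years.
payment = monthly_payment(20_000, 0.06, 60)
total_interest = payment * 60 - 20_000
print(f"Monthly payment: ${payment:,.2f}")        # about $386.66
print(f"Total interest:  ${total_interest:,.2f}") # about $3,199
```

Seeing that a $20,000 loan quietly costs over $3,000 in interest is exactly the kind of lesson the proposed course would drive home.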
Students
who change their minds, after graduation, and decide to go on to college will
just have to pick up the Algebra later on when they have perhaps matured enough
to see some relevance of algebra and other mathematics courses in their
education. I was an
Iowa
farm boy who did not take
calculus, linear algebra, differential equations, finite mathematics, and
mathematical programming until I was in a doctoral program. This
turned out to be a brilliant move, because I looked like a genius to some of my
competitors in the program who had studied the mathematics years earlier and
long since forgotten much of it. For
example, one of our statistics qualifying examination questions in the doctoral
program required integrating the normal distribution (not an easy thing to do)
by shifting to polar coordinates. I
looked brilliant because I’d only recently learned how to integrate with
polar coordinates. My engineering
counterparts had long forgotten about polar coordinates --- http://mathworld.wolfram.com/PolarCoordinates.html
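For readers curious about the trick Jensen mentions, the polar-coordinate integration of the normal distribution is the classic Gaussian integral: square the one-dimensional integral, switch to polar coordinates (where $dx\,dy = r\,dr\,d\theta$), and the extra factor of $r$ makes the integral elementary.

```latex
I = \int_{-\infty}^{\infty} e^{-x^2/2}\,dx,
\qquad
I^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)/2}\,dx\,dy
    = \int_0^{2\pi}\!\int_0^{\infty} e^{-r^2/2}\, r\,dr\,d\theta
    = 2\pi\left[-e^{-r^2/2}\right]_0^{\infty} = 2\pi,
\quad\text{so } I = \sqrt{2\pi}.
```

This is why the normal density carries its $1/\sqrt{2\pi}$ normalizing constant.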
But
please, please do not ask me anything today about polar coordinates! Many
things learned in doctoral programs are not relevant to life later on.
Bob
Jensen
May
3, 2004 reply from Patricia Doherty [pdoherty@BU.EDU]
Subject: Re: Mathematics versus Reality versus Curriculum
"…wouldn't it
be better to substitute the Algebra course for a course combining Excel
financial functions with the basic mathematics of finance so that students
would understand how interest rates are calculated on loans and the basics of
how they might be cheated by lenders, investment advisors (read that mutual
fund advisors), and employers? …"
In order to
understand these, a student needs many of the concepts taught in Algebra I,
such as the way equations work. Algebra I is really a pretty basic math course
where they spend a lot of the first months reviewing basic math like fractions
and decimals. These seem to me like things students need to understand
spreadsheets and compound interest. Perhaps a DIFFERENT algebra course should
be offered for those who are college-bound, and those who may not be. The
latter would take a course more oriented to the "practical" needs
you cite, whereas the former (who also, by the way, need these things) would
take a more challenging, accelerated course, more along the lines of the
Algebra I you are probably thinking of.
p
I love being married.
It's so great to find that one special person you want to annoy for the rest
of your life. Author unknown.
Patricia A. Doherty
Instructor in Accounting Coordinator,
Managerial Accounting
Boston University School of Management
595 Commonwealth Avenue Boston, MA 02215
May 3, 2004 reply from Bob Jensen
Hi Pat,
Actually, I found that by using
Excel's financial functions my students grasp the concepts and the models
before they learn about the underlying equations. They are deriving
amortization schedules and checking out automobile financing advertisements
long before they must finally study the underlying mathematical derivations.
When we eventually derive the
equations, the mathematics makes more sense to the students. Sometimes they
claim that they understood it better before learning about the math. It's a
little like learning to appreciate poetry before delving into such things as
meter and iambic pentameter --- http://www.sp.uconn.edu/~mwh95001/iambic.html
I'm not sure at the first-course
level in high school that it is really necessary to delve under the hood and
understand the equations like we teach them in college. I certainly don't
think that many high school students who never intend to go to college get
much out of learning how to solve quadratic equations and other topics in
Algebra 1. They have less interest because they don't see much use to them
unless they are proceeding on to calculus and college.
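The amortization exercise Jensen describes can be sketched outside Excel as well. Here is a minimal Python version of the schedule his students build with Excel's financial functions; the $10,000, 12%, 12-month loan is a made-up illustration.

```python
# A minimal amortization schedule: each month, interest accrues on the
# remaining balance and the rest of the fixed payment retires principal.

def amortization_schedule(principal, annual_rate, months):
    """Return (month, interest, principal_paid, balance) rows for a fixed-rate loan."""
    r = annual_rate / 12
    payment = principal * r / (1 - (1 + r) ** -months)  # Excel's PMT formula
    rows, balance = [], principal
    for month in range(1, months + 1):
        interest = balance * r              # interest on the remaining balance
        principal_paid = payment - interest
        balance -= principal_paid
        rows.append((month, interest, principal_paid, balance))
    return rows

# First few rows of a $10,000 loan at 12% APR over 12 months.
for month, interest, paid, balance in amortization_schedule(10_000, 0.12, 12)[:3]:
    print(f"{month:2d}  interest ${interest:6.2f}  principal ${paid:6.2f}  balance ${balance:8.2f}")
```

Students can check an advertised car loan against such a table long before they see the algebraic derivation of the payment formula, which is Jensen's point.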
Thanks,
Bob
May 4, 2004 reply from Gadal, Damian [DGADAL@CI.SANTA-BARBARA.CA.US]
Subject: Re: Mathematics versus Reality versus Curriculum
I thought about this
most of last night, and what I've been advocating is not failing our youth.
That to me means not dumbing down our education system.
The car analogy
doesn't work for me, as cars were engineered with end-users in mind, as were
phones, computers, radios, televisions, etc.
I don't think we
should put the roof on the house before building the foundation.
DPG
Waterfront Accounting
May 4 reply from Bob Jensen
I think the real distinction is
whether you think failure to require Algebra I for all students is necessarily
dumbing down the entire education system. Many nations (notably Germany
and Japan) have flexible educational curricula to serve the different needs of
different students.
Alternative curricula may be equally
challenging without being a "dumbing down." Dumbing down
arises when a course in any given curriculum is made easier and easier just so
more students can pass the course.
Having alternative courses is not in
and of itself a "dumbing down." For example, replacing Algebra I
with "foundations of the mathematics of finance" or
"foundations of music composition" would not necessarily be "dumbing
down." Dumbing down any given course means
taking the hard stuff out so that more students can pass. Replacing one hard
course with another hard course is not dumbing down and may improve education
because the alternate curriculum is more motivating to the student.
If you want to read more about how to
"dumb down" math courses, go to http://www.intres.com/math/
***********************************************
The Old
Adobe Union School District in Petaluma, California has adopted a
new math program: MathLand.
The net result of this action is to dumb-down
the math curriculum and turn the math program into a math appreciation
program. This site is dedicated to informing parents in Petaluma, California
about the issues involved.
Children grow
older and the protest continues against the use of the CPM
Algebra I program being used at Kenilworth Junior High of the Petaluma Joint
Unified District. This program is so deficient it doesn't cover even half of
the California State Content Standards for Algebra I.
**************************************
May 2, 2004 reply from Michael O'Neil,
CPA Adjunct Prof. Weber [Marine8105@AOL.COM]
As a teacher of
Algebra A (yes, Algebra A: the first half of Algebra I) I can tell you that
you do not even know how bad it is in public schools. I am also a CPA and
teach an accounting and consumer finance class in high school. Yes, I fail
most of my students. Most of my Algebra A students have already failed
Pre-Algebra. They are very lazy, and given their low academic level, many of
them are discipline problems.
Despite having
standards and trying to TEACH them the material I was not given tenure and
then told flat out by the principal (a young man with little teaching
experience) that he did not have to give me a reason, and he would not give me
a reason. This despite my yearly evaluation having no negative
areas--satisfactory in all areas.
California will let
schools use accounting as a math class but will not give me credit toward my
Math credential. So in theory it might be that in a school accounting would be
a 12th grade class, and I would not be able to teach it, despite a MPAcc and
CPA.
It will be
interesting when schools show a high pass rate in Algebra I and no correlation
to the Exit exam.
Mike ONeil
May 5, 2004 reply from XXXXX
I won't even start the story of what the Headmaster
told me about the Cs in my Spanish class I gave to three students missing most
of the semester due to their parents' taking them on repeated ski trips to
Colorado and the students not only not turning in assigned-the-week-before
homework, but clearly (matching their tests to the key) failing two of the
three exams in the class. My Cs were not even honest in regards to cumulative
work done, and pushing the packet.
These students, according to the Headmaster, needed
at least Bs in the class, for reasons I did not need to know. I discovered,
after that reason was given that these parents were funders of the new gym and
were pledged to give more. Keep in mind that this private school, in (City
X), was and still is known for having more students test
higher on SATs than other private schools in town. This school also requires 5
years of Latin to get out, and it's a joke to see the helpless ones struggle
with Latin the first time (of course never having taken a foreign language in
school before) when their rich parents transfer them in from other private
schools or HISD to begin to learn Latin and keep a required B in those classes
to graduate.
Their parents whine that the kids are having too much
homework, etc. What a mess. And that was one of the very best schools (City
X) had/has to offer. I, needless to say, did not return
to teach there the next year. And to teach in HISD, although teachers are
needed, requires a handgun license and proficiency in martial arts as well as
private bodyguard just to be defended against the classroom population. This
week's New Yorker has such a cartoon (copy over at the library;
hysterical).
Bob, thanks for letting me vent here. Community
colleges offer some hope, but there is such a time delay because of remedial
work needed. Home schooling early might work in some cases. And to think these
people are our country's future leaders. In closing, I certainly know that it
is more difficult to learn as an adult than as a child or adolescent...
Happy Wednesday...
Very best,
XXXXX
As I said previously, great teachers come in about as many varieties as
flowers. Click on the link below to read about some of the varieties
recalled by students from their high school days. It should be noted that
"favorite teacher" is not synonymous with "learned the
most." Favorite teachers are often great at entertaining and/or
motivating. Favorite teachers often make learning fun in a variety of
ways.
However, students may actually learn the most from pretty dull teachers with
high standards and demanding assignments and exams. Dull teachers may also
be the dedicated souls who are willing to spend extra time in one-on-one
sessions or extra-hour tutorials that ultimately have an enormous impact on
mastery of the course. And then there are teachers who are not so
entertaining and do not spend much time face-to-face but who are winners
because they have developed learning materials that far exceed those of
other teachers in terms of student learning.
The recollections below tend to lean toward entertainment and "fun"
teachers, but you must keep in mind that these were written after-the-fact by
former high school students. In high school, dull teachers tend not to be
popular before or after the fact. This is not
always the case when former students recall their college professors.
"A dozen roses to my favorite teacher," The Philadelphia Inquirer,
November 30, 2004 --- http://www.philly.com/mld/inquirer/news/special_packages/phillycom_teases/10304831.htm?1
What works in education?
Perhaps Colleges Should Think About This
"School Ups Grade by Going Online," by Cyrus Farivar, Wired News,
October 12, 2004 --- http://www.wired.com/news/culture/0,1284,65266,00.html?tw=newsletter_topstories_html
Until last year, Walt Whitman Middle School 246 in
Brooklyn was considered a failing school by the state of New York.
But with the help of a program called HIPSchools
that uses rapid communication between parents and teachers through e-mail and
voice mail, M.S.
246 has had a dramatic turnaround. The premise behind "HIP"
comes from Keys Technology Group's mission of "helping involve
parents."
The school has seen distinct improvement in the
performance of its 1300 students, as well as regular attendance, which has
risen to 98 percent (an increase of over 10 percent) in the last two years,
according to Georgine Brown-Thompson, academic intervention services
coordinator at M.S. 246.
Continued in the article
Work Experience Substitutes for College Credits
Bob Jensen cannot support an initiative to grant college credit for work
experience
The proposal also said Pennsylvania officials would
explore the creation of a centralized body that would try to commonly assess and
define what kinds of work experience should qualify for credit, to ease the
transfer of credit for such work among colleges in the commonwealth . . . Peter
Stokes, executive vice president at Eduventures, an education research firm,
agreed that policies that make it easier for workers to translate their previous
work experience into academic credit can go a long way in encouraging mid-career
workers who might be daunted by the prospect of entering college for the first
time. “For someone who’s been in the work force for 10 or 15 years, it can be a
lot less scary if the college or university you’re enrolling in can tell you
that you’re already halfway there, or a third of the way there,” Stokes said.
Doug Lederman, "Work Experience for College Credit," Inside Higher Ed,
August 14, 2006 ---
http://www.insidehighered.com/news/2006/08/14/pennsylvania
An Old Fudd's Comment
Everybody has life experience, much of which may be more educational than
passage of college courses or studying for qualification examinations. I just
don't think it's possible to fairly assess this without at least having
qualifying examinations for waiving courses. It may be possible to have
qualifying examinations that allow certain courses to be replaced by other
courses in a curriculum plan that recognizes that a student has sufficient
knowledge for advanced courses. Maybe I'm just old fashioned, but I think that
the total number of course credits required for a degree should not be lowered by
life experience or qualifying examinations. Students should earn their credits
in onsite or online courses that, hopefully, entail interactive learning between
students and both instructors and other students.
Bob Jensen's threads on higher education controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
"College Credit for Life Experience: 2 Groups Offer Assessment Services,"
by David Glenn, Chronicle of Higher Education, May 16, 2011 ---
http://chronicle.com/article/Will-Work-for-Credit/127564/
Add one more thing to the list of tasks that
colleges can outsource.
This time, it's assessing "experiential
learning"—that is, the skills students have gained in the workplace and
other life trials—and determining how many credit hours should be awarded
for that learning. Two fledgling organizations are game.
The idea of handing such decisions to outsiders
might make some faculty members wince. But the services' creators say that
their networks of portfolio evaluators will establish national norms that
will make experiential-learning assessment more clear-cut, rigorous, and
credible. And as the concept gains legitimacy, they say, it could help
hundreds of thousands of people complete college.
"We're taking baby steps with our first 50 or 60
students," says Pamela Tate, president and chief executive of the Council
for Adult and Experiential Learning. "As this comes to scale, I hope that it
will have an enormous impact." Ms. Tate's organization, known as CAEL, is
the driving force behind
Learning Counts, the larger of the two projects.
The second portal, which went online only two weeks
ago, is KNEXT,
a for-profit corporate sibling of Kaplan University.
"This is absolutely the right thing to do for adult students," says KNEXT's
vice president, Brian Oullette. "College-level learning is college-level
learning, regardless of where it's acquired. Adult students deserve to have
that learning recognized and transcripted and to have it count toward a
college degree."
The two services have roughly the same design. Both
of them primarily focus on adult workers who earned a significant number of
college credits years ago but who, for whatever reason, never finished a
degree.
Each service offers interested students a free
telephone-advising session to determine whether their workplace learning
might warrant course credit. Students who pass that threshold are invited to
sign up for an online course that will teach them to prepare portfolios that
reflect their experiential learning. (Each subject area for which the
student wants credit—say, computer science or management or
communications—gets a separate portfolio.) Those portfolios are then
submitted to an evaluator from a national panel of subject-matter experts,
who deems the portfolio worthy (or not) of course credit.
More than 80 colleges have signed up as Learning
Counts
pilot institutions since the service officially
opened its doors in January. Those pilot colleges have pledged to accept the
credit recommendations of the national evaluators, and they have agreed to
award students three credit hours for successfully completing the
portfolio-creation course itself.
KNEXT, meanwhile, has only a handful of
participating colleges at this early date. Beyond Kaplan itself, only
Grantham University and the New England College of Business and Finance have
signed articulation agreements. Mr. Oullette says the project is
aggressively seeking more partners.
In both systems, students are free to submit their
completed portfolios to nonparticipating colleges—but in such cases there is
no guarantee that any course credit will be awarded.
Show, Don't Tell
One of the Learning Counts pilot institutions is
Saint Leo University, in Florida. That institution had a longstanding
program for awarding credit for experiential learning. But its president,
Arthur F. Kirk Jr., says the new national system should be much more
efficient and transparent.
Continued in article
Jensen Comment
All too often colleges give credit for life/work experience as a marketing
strategy to attract students into degree programs. The intent is marketing
rather than academics. I'm against all programs giving college credit for
life/work experience that are not competency based, meaning that CLEP-type
examinations should be administered to assess whether applicants have truly
mastered the course content for which college credit is being given without
having to take college courses.
Certification (Licensing) Examinations
Certification Examinations Serve Two Purposes: One is to screen
for quality and the other is to put up a barrier to entry to keep a
profession from being flooded
The California test (BAR exam for
lawyers), by all accounts, is tough. It lasts three
days, as compared with two or 2½-day exams in most states. Only one state --
Delaware -- has a higher minimum passing score. According to the National
Conference of Bar Examiners, just 44% of those taking the California bar in 2004
passed the exam, the lowest percentage in the country, versus a national average
of 64% . . . Critics say the test is capricious, unreliable and a poor measure
of future lawyering skills. Some also complain that California's system serves
to protect the state's lawyers by excluding competition from out-of-state
attorneys. There has been some loosening of the rules. California adopted rules
last year permitting certain classes of lawyers to practice in the state without
having to take the bar.
"Raising the Bar: Even Top Lawyers Fail California Exam," by James Bandler and
Nathan Koppel, December 5, 2005; Page A1 ---
http://online.wsj.com/article/SB113374619258513723.html?mod=todays_us_page_one
Jensen Comment:
Unlike the BAR exam, the CPA examination is a national examination with uniform
grading standards for all 50 states, even though other licensure requirements
vary from state to state. Also the CPA examination allows students to pass
part of the exam while allowing them to retake other parts on future
examinations. Recently the CPA examination became a computerized
examination (with both objective and essay/problem components). This may
change performance scores somewhat relative to the data presented below.
You can read the following at
http://www.cpaexcel.com/candidates/performance.html
National Average Pass Rates
The National Association of State Boards of Accountancy (NASBA) publishes an
Annual Report Entitled "Candidate Performance on the Uniform CPA
Examination." Annual data since 1998 typically showed that, for each exam
held since that year:
- Only about 12% of all candidates passed all 4
exam parts
- 58% of first time candidates did not pass any
exam part
- 46% of repeat candidates did not pass any exam
part
Student Pass Rates at Top Colleges, per
NASBA, May 2004 Edition:
- Top 10 colleges, students without advanced
degrees 40.78% average
- Top 10 colleges, students with advanced
degrees 65.53% average
"The 33 Whitest Jobs In America," by Derek Thompson, The Atlantic,
November 6, 2013 ---
http://www.theatlantic.com/business/archive/2013/11/the-workforce-is-even-more-divided-by-race-than-you-think/281175/
Academe does not appear in the
90%+ white chart. Perhaps this is because academic disciplines vary
so much in terms of having minority professors, especially disciplines
(like mathematics and accounting) with increasing proportions of Asian
Americans but not African Americans and Latinos. Academe is also confounded by
having "minorities" who are still on green cards and are otherwise non-native
Americans. Although the proportion of white accounting professors is
declining, due mostly to a growing number of Asian accounting professors, the
proportion of African American and Latino accounting professors is miserably low.
The KPMG Foundation has for decades funded a serious initiative to
increase the number of African Americans in accountancy doctoral programs, but
the number of graduates is still a drop in the proverbial bucket.
"Whatever Happened to All Those Plans to Hire
More Minority Professors?" by Ben Gose, Chronicle of Higher Education,
September 26, 2008
http://chronicle.com/weekly/v55/i05/05b00101.htm?utm_source=at&utm_medium=en
CPA firms increased their hiring of minorities to over 30% at the entry
level, but the retention level drops back down to the neighborhood of 20% ---
http://www.journalofaccountancy.com/Issues/2012/Jun/20114925.htm
Reasons for the lower retention rate include failure of new hires to pass the CPA
Examination after being hired. Another, perhaps more important, reason is the
traditionally high turnover of newer employees in the larger CPA firms,
where most of those employees move into higher-paying jobs (often with clients)
or leave the labor force to become full-time parents. Top minority
employees of CPA firms are especially likely to receive attractive job offers
from clients.
Law schools have been especially aggressive in recruiting top African
American and Latino students.
This competition especially hurts when recruiting minority students for masters
programs in accountancy (most CPA Examination candidates now graduate from such
masters programs). One reason for law school minority recruitment success is
that students can major in virtually any discipline in college and later be
admitted to law school if they have the required LSAT scores. Most masters of
accounting programs require what is tantamount to an undergraduate accounting
major. This greatly reduces the number of minorities eligible to take the CPA
Examination. However, students can still be business accountants without having
passed the CPA examination. It's much harder, however, to get entry-level
experience without first working for either a CPA firm or the IRS.
Occupations with tough licensing examinations tend to have lower
percentages of blacks and Latinos.
More than half of the black and Latino students who
take the state teacher licensing exam in Massachusetts fail, at rates that are
high enough that many minority college students
are starting to avoid teacher training programs,
The Boston Globe reported. The failure rates
are 54 percent (black), 52 percent (Latino) and 23 percent (white).
Inside Higher Ed, August 20, 2007 ---
http://www.insidehighered.com/news/2007/08/20/qt
Bob Jensen's threads on careers ---
http://faculty.trinity.edu/rjensen/Bookbob1.htm#careers
Life in Our Litigious Society
If attendance alone does not guarantee a passing grade, sue the school?
This is from Karen Alpert's FinanceMusings Blog on August 23, 2006 ---
http://financemusings.blogspot.com/
Finally, I'd like to mention a
piece from Online Opinion about education as a
consumer good. It talks about a legal settlement between a secondary school
in Melbourne and the parents of a student who did not learn to read
properly.
Those in the know have warned that this case could
result in an education system burdened by increased litigation by
parents against schools, with schools having to be very careful about
how they promote their standard of teaching to parents of future
students. Not only does the case highlight that education is becoming an
area of focus in an increasingly litigious society, but that on a
broader level education - at whatever level - has become little more
than a product for sale in the market for knowledge and training.
While the case at hand involved a secondary school,
I can easily see it applied to tertiary institutions; especially in the case
of full fee paying students. Some students already seem to think that
attendance should guarantee a passing grade. While I believe that certain
pedagogical standards must be met, students must participate in their own
education. Those who are not willing to work toward understanding and
learning should not be handed a degree. (Say
what?)
Jensen Comment
I think Karen's a party poop!
Bob Jensen's threads on higher education controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm
Peer Review in Which Reviewer Comments are Shared With the World
I think this policy motivates journal article referees to be more
responsible and accountable!
Questions
Is this the beginning of the end for the traditional refereeing process
of academic journals?
Could this be the death knell of the huge
SSRN commercial
business that blocks sharing of academic working papers unless readers
and libraries pay?
"Nature editors start online peer review," PhysOrg, September 14, 2006
---
http://physorg.com/news77452540.html
Editors of the prestigious scientific journal
Nature have reportedly embarked on an experiment of their own: adding an
online peer review process.
Articles currently submitted for publication in the
journal are subjected to review by several experts in a specific field, The
Wall Street Journal reported. But now editors at the 136-year-old Nature
have proposed a new system for authors who agree to participate: posting the
paper online and inviting scientists in the field to submit comments
approving or criticizing it.
Although lay readers can also view the submitted
articles, the site says postings are only for scientists in the discipline,
who must list their names and institutional e-mail addresses.
The journal -- published by the Nature Publishing
Group, a division of Macmillan Publishers Ltd., of London -- said it will
discard any comments found to be irrelevant, intemperate or otherwise
inappropriate.
Nature's editors said they will take both sets of
comments -- the traditional peer-review opinions and the online remarks --
into consideration when deciding whether to publish a study, The Journal
reported.
October 5, 2006 message from Carolyn Kotlas
[kotlas@email.unc.edu]
NEW TAKE ON PEER REVIEW OF SCHOLARLY PAPERS
The Public Library of Science will launch its
first open peer-reviewed journal called PLoS ONE which will focus on
papers in science and medicine. Papers in PLoS ONE will not undergo
rigorous peer review before publication. Any manuscript that is deemed
to be a "valuable contribution to the scientific literature" can be
posted online, beginning the process of community review. Authors are
charged a fee for publication; however, fees may be waived in some
instances. For more information see
http://www.plosone.org/.
For an article on this venture, see: "Web Journals Threaten
Peer-Review System" By Alicia Chang, Yahoo! News, October 1, 2006 ---
http://news.yahoo.com/s/ap/20061001/ap_on_sc/peer_review_science
A New Model for Peer Review in Which
Reviewer Comments are Shared With the World
Peer Reviewers Comments are Open for All to See in New Biology Journal
From the University of Illinois Issues in Scholarly Communication
Blog, February 15, 2006 ---
http://www.library.uiuc.edu/blog/scholcomm/
BioMed
Central has launched Biology Direct, a new online open access
journal with a novel system of peer review. The journal will operate
completely open peer review, with named peer reviewers' reports
published alongside each article. The author's rebuttals to the
reviewers' comments are also published. The journal also takes the
innovative step of requiring that the author approach Biology Direct
Editorial Board members directly to obtain their agreement to review
the manuscript or to nominate alternative reviewers. [Largely taken
from a BioMed Central press report.]
Biology Direct launches with publications
in the fields of Systems Biology, Computational Biology, and
Evolutionary Biology, with an Immunology section to follow soon. The
journal considers original research articles, hypotheses, and
reviews and will eventually cover the full spectrum of biology.
Biology Direct is led by Editors-in-Chief
David J Lipman, Director of the National Center for Biotechnology
Information (NCBI), a division of the National Library of Medicine (NLM)
at NIH, USA; Eugene V Koonin, Senior Investigator at NCBI; and Laura
Landweber, Associate Professor at Princeton University, Princeton,
NJ, USA.
For more information about the journal or about how to submit a
manuscript to the journal, visit the Biology Direct website ---
http://www.biology-direct.com/
Bob Jensen's threads on peer review controversies are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#PeerReview
As David Bartholomae observes, “We make a huge
mistake if we don’t try to articulate more publicly what it is we value in
intellectual work. We do this routinely for our students — so it should not be
difficult to find the language we need to speak to parents and legislators.” If
we do not try to find that public language but argue instead that we are not
accountable to those parents and legislators, we will only confirm what our
cynical detractors say about us, that our real aim is to keep the secrets of our
intellectual club to ourselves. By asking us to spell out those secrets and
measuring our success in opening them to all, outcomes assessment helps make
democratic education a reality.
Gerald Graff, "Assessment Changes
Everything," Inside Higher Ed, February 21, 2008 ---
http://www.insidehighered.com/views/2008/02/21/graff
Gerald Graff is professor of English at the University of Illinois at Chicago
and president of the Modern Language Association. This essay is adapted from a
paper he delivered in December at the MLA annual meeting, a version of which
appears on the MLA’s Web site and is reproduced here with the association’s
permission. Among Graff’s books are Professing Literature, Beyond the
Culture Wars and Clueless in Academe: How School Obscures the Life of the Mind.
The consensus report, which was approved by the
group’s international board of directors, asserts that it is vital when
accrediting institutions to assess the “impact” of faculty members’ research on
actual practices in the business world.
"Measuring ‘Impact’ of B-School Research," by Andy Guess, Inside
Higher Ed, February 21, 2008 ---
http://www.insidehighered.com/news/2008/02/22/impact
Ask anyone with an M.B.A.: Business school provides
an ideal environment to network, learn management principles and gain access
to jobs. Professors there use a mix of scholarly expertise and business
experience to teach theory and practice, while students prepare for the life
of industry: A simple formula that serves the school, the students and the
corporations that recruit them.
Yet like
any other academic enterprise, business schools expect their
faculty to produce peer-reviewed research. The relevance,
purpose and merit of that research has been debated almost
since the institutions started appearing, and now a new
report promises to add to the discussion — and possibly stir
more debate. The Association to Advance Collegiate Schools
of Business on Thursday released the final report of its
Impact of Research Task Force, the
result of feedback from almost 1,000 deans, directors and
professors to a preliminary draft circulated in August.
The consensus
report, which was approved by the group’s international
board of directors, asserts that it is vital when
accrediting institutions to assess the “impact” of faculty
members’ research on actual practices in the business world.
But it does not settle on concrete metrics for impact,
leaving that discussion to a future implementation task
force, and emphasizes that a “one size fits all” approach
will not work in measuring the value of scholars’ work.
The report
does offer suggestions for potential measures of impact. For
a researcher studying how to improve manufacturing
practices, impact could be measured by counting the number
of firms adopting the new approach. For a professor who
writes a book about finance for a popular audience, one
measure could be the number of copies sold or the quality of
reviews in newspapers and magazines.
“In the
past, there was a tendency I think to look at the
[traditional academic] model as kind of the desired
situation for all business schools, and what we’re saying
here in this report is that there is not a one-size-fits-all
model in this business; you should have impact and
expectations dependent on the mission of the business school
and the university,” said Richard Cosier, the dean of the
Krannert School of Management at Purdue University and vice
chair and chair-elect of AACSB’s board. “It’s a pretty
radical position, if you know this business we’re in.”
That
position worried some respondents to the initial draft, who
feared an undue emphasis on immediate, visible impact of
research on business practices — essentially, clear
utilitarian value — over basic research. The final report
takes pains to alleviate those concerns, reassuring deans
and scholars that it wasn’t minimizing the contributions of
theoretical work or requiring that all professors at a
particular school demonstrate “impact” for the institution
to be accredited.
“Many
readers, for instance, inferred that the Task Force believes
that ALL intellectual contributions must be relevant to and
impact practice to be valued. The position of the Task Force
is that intellectual contributions in the form of basic
theoretical research can and have been extremely valuable
even if not intended to directly impact practice,” the
report states.
“It also is
important to clarify that the recommendations would not
require every faculty member to demonstrate impact from
research in order to be academically qualified for AACSB
accreditation review. While Recommendation #1 suggests that
AACSB examine a school’s portfolio of intellectual
contributions based on impact measures, it does not specify
minimum requirements for the maintenance of individual
academic qualification. In fact, the Task Force reminds us
that to demonstrate faculty currency, the current standards
allow for a breadth of other scholarly activities, many of
which may not result in intellectual contributions.”
Cosier, who
was on the task force that produced the report, noted that
business schools with different missions might require
differing definitions of impact. For example, a traditional
Ph.D.-granting institution would focus on peer-reviewed
research in academic journals that explores theoretical
questions and management concepts. An undergraduate
institution more geared toward classroom teaching, on the
other hand, might be better served by a definition of impact
that evaluated research on pedagogical concerns and learning
methods, he suggested.
A further
concern, he added, is that there simply aren’t enough
Ph.D.-trained junior faculty coming down the pipeline, let
alone resources to support them, to justify a single
research-oriented model across the board. “Theoretically,
I’d say there’s probably not a limit” to the amount of
academic business research that could be produced, “but
practically there is a limit,” Cosier said.
But
some critics have worried that the
report could encourage a focus on the immediate impact of
research at the expense of theoretical work that could
potentially have an unexpected payoff in the future.
Historically, as the report notes, business scholarship was
viewed as inferior to that in other fields, but it has
gained esteem among colleagues over the past 50 or so years.
In that context, the AACSB has pursued a concerted effort to
define and promote the role of research in business schools.
The report’s concrete recommendations also include an awards
program for “high-impact” research and the promotion of
links between faculty members and managers who put some of
their research to use in practice.
The
recommendations still have a ways to go before they become
policy, however. An implementation task force is planned to
look at how to turn the report into a set of workable
policies, with some especially worried about how the
“impact” measures would be codified. The idea, Cosier said,
was to pilot some of the ideas in limited contexts before
rolling them out on a wider basis.
Jensen Comment
It will almost be a joke to watch leading accountics researchers trying to show
how their esoteric findings have impacted the practice world when the professors
themselves cannot point to any independent replications of their own work ---
http://faculty.trinity.edu/rjensen/Theory01.htm#Replication
Is the practice world so naive as to rely upon findings of scientific research
that has not been replicated?
Real Versus Phony Book Reviews
If your paper was rejected for publication, call the FBI
"When authors attack" Candace Sams's decision to report bad Amazon
reviewers to the FBI is further proof why it's best not to respond publicly to
your critics," by Allison Flood, The Guardian, December 23, 2009 ---
http://www.guardian.co.uk/books/booksblog
Candace Sams's decision to report bad Amazon
reviewers to the FBI is further proof why it's best not to respond publicly
to your critics.
This year has seen its
fair share of authors kicking off about poor reviews, from Alice Hoffman,
who called a Boston Globe critic a "moron" on Twitter
following a negative review of her novel The Story Sisters, to Alain de
Botton, who
posted an excoriating comment on a reviewer's blog
after a poor write-up for The Pleasures and Sorrows of Work in the New York
Times. But the latest upset, played out on the pages of Amazon, is possibly
the weirdest.
Not only does it centre on
the dire-sounding romance novel,
Electra Galaxy's Mr Interstellar Feller (product
description: "When a handsome yet stuffy intergalactic cop is forced to
enter the Electra Galaxy's Mr Interstellar Feller competition, and is
partnered with an Earth cop as his manager and overseer, hilarity and
romance ensue"), but it takes the bizarro quotient to new levels.
After
Amazon reviewer LB Taylor gave the novel one star,
calling it "a sad excuse for romance, mystery, and humor", she found herself
attacked online by one NiteflyrOne – shortly outed by commentors as Candace
Sams, author of the novel. With the discussion numbering almost 400 posts,
Sams has now deleted her posts. Fortunately,
they've been saved for posterity
by a host of sites.
"Authors,"
she wrote, "rarely have full editorial control;
rarely do they have even 'scant' control over their covers or the language
used in dialogue or even sequencing of scenes: love scenes, kissing scenes,
scenes of violence, etc. These are ultimately controlled by editorial
staff…very rarely the author alone." Oh I see – blame the editor.
And later,
in response to another (also negative) review: "It
might behoove them to understand that all romances will not read they way
they think they should; romances should 'not' be cookie-cutters of one
another. This has been the biggest complaint about romance on the whole -
that they all sound alike. Apparently 'some' reviewers 'want' them to sound
alike. When they don't, they aren't able to handle the material."
She then
tells the thread that she's reporting naysayers
to the FBI.
This is wonderfully batty
stuff – on a par, I'd say, with Anne Rice's 2004 outburst on Amazon when she
told negative reviewers they were "interrogating this text from the wrong
perspective". "Your stupid, arrogant assumptions about me and what I am
doing are slander,"
she wrote. "You have
used the site as if it were a public urinal to publish falsehood and lies."
And I have to say, while I
agree with
Neil Gaiman's point that the Sams affair is "a
horrible car crash [and] if any of you are ever tempted to respond to bad
reviews or internet trolls etc, it's a salutary reminder of why some things
are better written in anger and deleted in the morning", I find angry author
responses strangely compelling. I like seeing flashes of the person behind
the book, and while responding may do the author's reputation no good at all
– turning the other cheek being the best way to deal with negative reviews -
I can see why they might do it anyway. Yes, it's a car crash, but I can't
stop rubber-necking.
Jensen Comment
I'm more suspicious of authors and/or publishers planting phony rave reviews.
There's a lot of moral hazard here.
Cause Versus Correlation
"How Non-Scientific Granulation Can
Improve Scientific Accountics"
http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf
By Bob Jensen
This essay takes off from the following quotation:
A recent accountics science study suggests
that audit firm scandal with respect to someone else's audit may be a reason
for changing auditors.
"Audit Quality and Auditor Reputation: Evidence from Japan," by Douglas J.
Skinner and Suraj Srinivasan, The Accounting Review, September 2012,
Vol. 87, No. 5, pp. 1737-1765.
Our conclusions are subject
to two caveats. First, we find that clients switched away from ChuoAoyama in
large numbers in Spring 2006, just after Japanese regulators announced the
two-month suspension and PwC formed Aarata. While we interpret these events
as being a clear and undeniable signal of audit-quality problems at
ChuoAoyama, we cannot know for sure what drove these switches
(emphasis added).
It is possible that the suspension caused firms to switch auditors for
reasons unrelated to audit quality. Second, our analysis presumes that audit
quality is important to Japanese companies. While we believe this to be the
case, especially over the past two decades as Japanese capital markets have
evolved to be more like their Western counterparts, it is possible
that audit quality is, in general, less important in Japan
(emphasis added) .
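The caveat Skinner and Srinivasan state is the classic correlation-versus-causation problem: a common shock (here, the regulatory suspension) can drive two outcomes at once, producing a strong correlation even when neither outcome causes the other. The point can be sketched with a minimal simulation; all of the numbers below are hypothetical illustrations, not drawn from the study:

```python
import random
import statistics

random.seed(0)
n = 10_000

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

# Hidden common cause: a regulatory shock such as a suspension announcement.
shock = [1 if random.random() < 0.3 else 0 for _ in range(n)]

# Two outcomes, each driven by the shock plus independent noise;
# neither outcome causes the other.
switches = [0.8 * s + random.gauss(0, 0.3) for s in shock]
quality_concern = [0.6 * s + random.gauss(0, 0.3) for s in shock]

r = corr(switches, quality_concern)
print(round(r, 2))  # strongly positive despite no direct causal link
```

Observing the correlation alone, a researcher cannot distinguish "quality concerns drove the switches" from "the suspension drove both," which is exactly the ambiguity the authors concede.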
Research Questions About the Corporate Ratings Game
"How Good Are Commercial Corporate Governance
Ratings?," by Bill Snyder, Stanford GSB News, June 2008 ---
http://www.gsb.stanford.edu/news/research/larker_corpgov.html
STANFORD GRADUATE SCHOOL OF BUSINESS—A
study by Stanford law and business faculty members casts strong doubt upon
the value and validity of the ratings of governance advisory firms that
compile indexes to evaluate the effectiveness of a publicly held company’s
governance practices.
Enron, Worldcom, Global Crossing, Sunbeam. The list
of major corporations that appeared rock solid—only to founder amid scandal
and revelations of accounting manipulation—has grown, and with it so has
shareholder concern. In response, a niche industry of corporate watchdog
firms has arisen—and prospered.
Governance advisory firms compile indexes that
evaluate the effectiveness of a publicly held company’s governance
practices. And they claim to be able to predict future performance by
performing a detailed analysis encompassing many variables culled from
public sources.
Institutional Shareholder Services, or ISS, the
best known of the advisory companies, was sold for a reported $45 million in
2001. Five years later, ISS was sold again; this time for $553 million to
the RiskMetrics Group. The enormous appreciation in value underscores the
importance placed by the investing public on ratings and advisories issued
by ISS and its major competitors, including Audit Integrity, Governance
Metrics International (GMI), and The Corporate Library (TCL).
But a study by faculty at the
Rock Center for Corporate Governance at Stanford
questions the value of the ratings of all four firms. “Everyone would agree
that corporate governance is a good thing. But can you measure it without
even talking to the companies being rated?” asked David Larcker, codirector
of the Rock Center and the Business School’s James Irvin Miller Professor of
Accounting and one of the authors. “There’s an industry out there that
claims you can. But for the most part, we found only a tenuous link between
the ratings and future performance of the companies.”
The study was extensive, examining more than 15,000
ratings of 6,827 separate firms from late 2005 to early 2007. (Many of the
corporations are rated by more than one of the governance companies.) It
looked for correlations among the ratings and five basic performance
metrics: restatements of financial results, shareholder lawsuits, return on
assets, a measure of stock valuation known as the Q Ratio, and Alpha—a
measure of an investment’s stock price performance on a risk-adjusted basis.
In the case of ISS, the results were particularly
shocking. There was no significant correlation between its Corporate
Governance Quotient (or CGQ) ratings and any of the five metrics. Audit
Integrity fared better, showing “a significant, but generally substantively
weak” correlation between its ratings and four of the five metrics (the Q
Ratio was the exception). The other two governance firms fell in between,
with GMI and TCL each showing correlation with two metrics. But in all three
cases, the correlations were very small “and did not appear to be useful,”
said Larcker.
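A study of this kind boils down to computing the correlation between each firm's governance rating and each performance metric, then asking whether it differs meaningfully from zero. The sketch below illustrates that test on simulated data (not the Stanford sample; the sample size and the deliberately tenuous effect size are assumptions for illustration):

```python
import math
import random
import statistics

random.seed(42)
n_firms = 500  # hypothetical sample, not the study's 6,827 firms

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

# Hypothetical governance ratings (higher score = "better" governance).
ratings = [random.gauss(50, 10) for _ in range(n_firms)]

# A performance metric (say, return on assets) whose true link to the
# rating is deliberately tenuous -- almost pure noise.
roa = [0.002 * g + random.gauss(0, 1.0) for g in ratings]

r = corr(ratings, roa)
# Rough t-statistic for the null hypothesis of zero correlation;
# a |t| below about 2 means no statistically significant link.
t = r * math.sqrt((n_firms - 2) / (1 - r * r))
print(f"r = {r:.3f}, t = {t:.2f}")
```

Repeating this for every rating index against every metric, and finding near-zero values like the one above almost everywhere, is essentially the pattern the researchers describe as "tenuous."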
There have been many academic attempts to develop a
rating that would reflect the overall quality of a firm’s governance, as
well as numerous studies examining the relation between various corporate
governance choices and corporate performance. But the Stanford study appears
to be the first objective analysis of the predictive value of the work of
the corporate governance firms.
The Rock Center for Corporate Governance is a joint
effort of the schools of business and law. The research was conducted
jointly by Robert Daines, the Pritzker Professor of Law and Business, who
holds a courtesy appointment at the Business School; Ian Gow, a doctoral
student at the Business School; and Larcker. It is the first in a series of
multidisciplinary studies to be conducted by the Rock
Center and the
Corporate Governance Research Program.
The current study also examined the proxy
recommendations to shareholders issued by ISS, the most influential of the
four firms. The recommendations delivered by ISS are intended to guide
shareholders as they vote on corporate policy, equity compensation plans,
and the makeup of their company’s board of directors. The researchers
initially assumed that the ISS proxy recommendations to shareholders also
reflect their ratings of the corporations.
But the study found there was essentially no
relation between its governance ratings and its recommendations. “This is a
rather odd result given that [ISS’s ratings index] is claimed to be a
measure of governance quality, but ISS does not seem to use their own
measure when developing voting recommendations for shareholders,” the study
says. Even so, the shareholder recommendations are influential; able to
swing 20 to 30 percent of the vote on a contested matter, says Larcker.
There’s another inconsistency in the work of the
four rating firms. They each look at the same pool of publicly available
data from the Securities and Exchange Commission and other sources, but use
different criteria and methodology to compile their ratings.
ISS says it formulates its ratings index by
conducting “4,000-plus statistical tests to examine the links between
governance variables and 16 measures of risk and performance.” GMI collects
data on several hundred governance mechanisms ranging from compensation to
takeover defenses and board membership. Audit Integrity’s AGR rating is
based on 200 accounting and governance metrics and 3,500 variables while The
Corporate Library does not rely on a quantitative analysis, instead
reviewing a number of specific areas, such as takeover defenses and
board-level accounting issues.
Despite the differences in methodology, one would
expect that the bottom line of all four ratings—a call on whether a given
corporation is following good governance practices—should be similar. That’s
not the case. The study found that there’s surprisingly little correlation
among the indexes the rating firms compile. “These results suggest that
either the ratings are measuring very different corporate governance
constructs and/or there is a high degree of measurement error (i.e., the
scores are not reliable) in the rating processes across firms,” the
researchers wrote.
The study is likely to be controversial. Ratings
and proxy recommendations pertaining to major companies and controversial
issues such as mergers are watched closely by the financial press and
generally are seen as quite credible. Indeed, board members of rated firms
spend significant amounts of time discussing the ratings and attempt to
bring governance practices in line with the standards of the watchdogs, says
Larcker.
But given the results of the Stanford
study, the time and money spent by public companies on improving governance
ratings does not appear to result in significant value for shareholders.
Qualifying to Sit for the CPA Examination
To my knowledge there are no lists of accredited accounting education
programs maintained by professional societies like the NASBA, AICPA, IMA, etc.
Each of the 50 state branches of NASBA controls who can sit for the CPA
examination in a given state. Most states require coverage of certain modules
(e.g., auditing, business law, etc.) in courses passed at regionally-accredited
colleges and universities. Hence, I think the only hurdle is that the courses
completed be from regionally-accredited institutions. It is not required that
these courses have AACSB accreditation of accounting or even be accounting
courses per se.
For a time, a private liberal arts college called Colorado College dropped
its Department of Business Administration. However, in order to afford an
accounting track to its students, it started up new accounting courses in the
Economics Department that were designed to provide the necessary modules
required to sit for the CPA examination in Colorado. Even though these were
“economics” courses, they offered modules to satisfy the Colorado State Board of
Accountancy. I think years later, Colorado College once again started up a
Department of Business Administration.
In the U.S. there are layers of accreditation, the first being regional
accreditation by accrediting bodies recognized by the U.S. Department of
Education ---
http://en.wikipedia.org/wiki/Educational_accreditation
Regional accreditation is essential for acceptance of transfer credits. I think
that credits earned by U.S. colleges that are not regionally accredited will not
be accepted by most state boards of accountancy. The gray zone here is in
acceptance of courses taken at foreign universities.
In the U.S. distance education colleges generally are second class
institutions until they get regional accreditation. Some have had success here
such as the University of Phoenix and Western Governors University. WGU knew
that it would fail unless it got regional accreditation. After it got such
accreditation, it commenced to thrive as a distance education alternative to
admission to an onsite regionally-accredited institution ---
http://faculty.trinity.edu/rjensen/crossborder.htm
I don’t think any of the following alternatives for accreditation of U.S.
programs matter for sitting for professional examinations such as the CPA
examination.
Many highly respected accounting education programs only opt for AACSB
business accreditation, including the most respected Ivy League-type university
accounting programs. It is not clear what advantage an AACSB accounting
accreditation provides except in the case of a college with a marginal
reputation that wants to give an added boost to its accounting program image.
You will find a number of those listed at
http://www.aacsb.edu/General/InstLists.asp?lid=4
The next level in Business Administration is actually a fork in the road,
depending on which accreditation path a school pursues. The international
specialized business accreditation body AACSB is
the only business education accreditation body recognized by the U.S. Department
of Education. The AACSB accredits undergraduate business, undergraduate
accounting, and MBA programs, but it does not accredit doctoral programs. Any
business or accounting doctoral program with a respectable reputation, however,
has lower-level AACSB accreditation.
Business programs that feel they cannot obtain or otherwise do not want AACSB
accreditation (it’s expensive to get and maintain), sometimes opt for some other
type of accreditation so they can call themselves “accredited” in their PR
materials. The most respected “other” accrediting agencies like ACBSP are listed
at
http://www.allbusinessschools.com/faqs/accreditation
These vary in standards and monitoring of standards compliance. I personally
don’t think these add a whole lot of benefit except where student prospects do
not understand the variation in the term “accreditation.”
Also see
http://en.wikipedia.org/wiki/List_of_recognized_accreditation_associations_of_higher_learning
Especially note the list of professional accrediting bodies at
http://en.wikipedia.org/wiki/List_of_recognized_accreditation_associations_of_higher_learning#Professional_accreditation
These are mostly training programs that do not grant traditional college
degrees.
Bob Jensen's career helpers are at
http://faculty.trinity.edu/rjensen/Bookbob1.htm#careers
Differences between "popular teacher"
versus "master teacher"
versus "mastery learning"
versus "master educator."
Teaching versus Research versus Education
October 24, 2007 message from XXXXX
Bob,
I'm writing this to get your personal view of the relationship between
teaching and research. I think there are lots of ways to potentially answer
this question, but I'm curious as to your thoughts.
October 27, 2007 reply from Bob Jensen
Hi XXXXX,
Wow! This is a tough question!
Since I know you're an award-winning teacher, I hope you will identify
yourself on the AECM and improve upon my comments below.
Your question initially is to comment on the relation between teaching and
research. In most instances research at some point in time led to virtually
everything we teach. In the long-run research thus becomes the foundation of
teaching. In the case of accounting education this research is based heavily on
normative and case method research. Many, probably most,
accountics researchers are not outstanding teachers of undergraduate
accounting unless they truly take the time for both preparation and student
interactions.
New education technologies may especially help these researchers teach better.
For example, adding video such as the BYU variable speed video described below
may replace bad lecturing in live classes with great video learning modules.
Similarly, master teachers and master educators are sometimes reputed researchers, but this is
probably the exception rather than the rule. Researchers have trouble finding
the time for great class preparation and open-door access.
********************
First, your question can be answered at the university-wide level, where experts
think that students, especially undergraduate students, get shortchanged by
research professors. Top research professors sometimes only teach doctoral
students or advanced masters students who are already deemed experts.
Research professors often prefer this arrangement so that they can focus
upon their research even when "teaching" a torturously esoteric course.
Undergraduate students in these universities are often taught by graduate
student instructors who have many demands on their time that impede careful
preparation for each class and limit the time they can give students outside
of class.
Often the highest ranked universities are among the worst universities in
terms of teaching. See
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#DoNotExcel
When top researchers are assigned undergraduate sections, their
sections are often the least popular. A management science professor years
ago (a top Carnegie-Mellon graduate) on the faculty at Michigan State University had no
students signing up for his elective courses. When assigned sections of
required courses, he got students only when the department head forced them
into his section. This
professor who was avoided by students at almost all costs was one of the
most intelligent human beings I ever met in my entire life.
One of the huge problems is that research professors give more attention
to research activities than to day-to-day class preparation. Bad preparation,
in turn, shortchanges students expecting more from teachers. I've certainly
experienced this as a student and as a faculty member where I've sometimes
been guilty of this as I look back in retrospect. A highly regarded
mathematics researcher at Stanford years ago had a reputation for always being
unprepared for class. He often could not solve his own illustrations in class,
flubbed answers to student questions, and confused himself while lecturing in a
disjointed manner. This is forgivable now and then, but not repeatedly, to the
point where his campus reputation for bad teaching was known by all. Yet if
there were a Nobel Prize for mathematics, he would likely
have won it. John Nash (the
"Beautiful Mind" at Princeton University who did win a Nobel Prize in
economics) had a similar teaching reputation, although his problems were
confounded by mental illness.
Then again, sometimes top researchers, I mean very top award-winning
researchers, are also the master teachers. For example, Bill Beaver, Mary
Barth, and some other top accounting research professors repeatedly won
outstanding teaching awards when teaching Stanford's MBA students and
doctoral students. I think in these instances
their research made them better teachers because they had so much leading-edge
material to share with students. Some of our peers are just good at
anything they seriously undertake.
But when it comes down to it, there's no single mold for a top teacher or
a top educator. And top educators are often not award-winning teachers.
Extremely popular teachers are not necessarily top educators ---
http://faculty.trinity.edu/rjensen/assess.htm#Teaching
In fact, some top educators may be unpopular teachers who get relatively
low student evaluations. In a somewhat analogous manner, the best physicians
may get low ratings from patients due to abrupt, impersonal, and otherwise
lousy bedside manners. Patients generally want the best physicians even when
bedside manners are lousy. This is not always the case with students. For
example, an educator may realize that students learn better when they're not
spoon-fed and instead have to work like the
little red hen (plant the seed, weed the
field, fend off the pests, harvest the grain, mill the grain, and bake their
own meals), yet many students still prefer fast-food instructors, especially
the easy-grading fast-food instructors.
********************
Second, your question can be answered at an individual level regarding
what constitutes a master educator or a master teacher. There are no molds
for such outstanding educators. Some are great researchers as well as being
exceptional teachers and/or educators. Many are not researchers, although
some of the non-researchers may be scholarly writers.
Some pay a price for devoting their lives to education administration and
teaching rather than research. For example, some who win all-campus teaching
awards and are selected by students and alumni as the top educators on
campus are stuck at low-paying associate professor rank because they
did not do the requisite research for higher-level promotions and pay.
Master Educators Who Deliver Exceptional Courses or
Entire Programs
But Have Little Contact With Individual Students
Before reading this section, you should be familiar with the document at
http://faculty.trinity.edu/rjensen/assess.htm#Teaching
Master educators can also be outstanding researchers, although research
is certainly not a requisite to being a master educator. Many master
educators are administrators of exceptional accounting education programs.
Their administrative duties typically leave little time for research,
although they may write about education and learning. Some master educators
are not even tenure track faculty.
What I've noticed in recent years is how technology can make a huge
difference. Nearly every college these days has some standout courses in
selected disciplines because those courses utilize some type of exciting
technology. Today I returned from a trip to Jackson, Mississippi, where I
conducted a day-long
CPE session
on education technology for accounting educators in Mississippi
(what great southern hospitality, by the way). So the audience would not have
to listen to me the entire day, I invited Cameron Earl from Brigham Young
University to make a presentation that ran for about 90 minutes. I learned
some things about top educators at BYU, which by the way is one
of the most respected universities in the world. If you factor out a
required religion course on the Book of Mormon, the most popular courses on
the BYU campus are the two basic accounting courses. By popular I mean in
terms of thousands of students who elect to take these courses even if they
have no intention of majoring in business or economics where these two courses are
required. Nearly all humanities and science students on campus try to sign up for
these two accounting courses.
After students take these two courses, capacity constraints restrict the
numbers of successful students in these courses who are then allowed to
become accounting majors at BYU. I mean I'm talking about a very, very small
percentage who are allowed to become accounting students. Students admitted
to the accounting program generally have campus-wide grade averages above
3.7.
This raises the question: what makes the two basic accounting courses so
exceptionally popular in such a large and prestigious university?
- These two basic accounting courses are not sought out for easy
grades. In fact they are among the hardest courses for high grades at
BYU. I think that this is probably true in most business schools in the
nation.
- These two BYU courses are not sought out for face-to-face contact
with the instructor. The courses have thousands of students each term
such that most students do not see the instructor outside of class even
though he's available over ten hours per week for those who seek him
out. Each course only meets in live classes eight times per semester.
Most of the speakers in those eight classes are outstanding visiting
speakers who add a great deal to the popularity of the course. This is
often one difference between a course run by a master educator versus a
master teacher. A master educator often brings in top talent to inspire
and educate students.
- The courses undoubtedly benefit from the shortage of accounting
graduates in colleges nationwide and the exceptional career
opportunities for students who want careers in accounting, taxation,
law, business management, government, criminal justice, and other
organizations. But these accountancy advantages exist for every college
that has an accounting education program. Most colleges do not have
two basic accounting courses that are sought out by every student in the
entire university. That makes BYU's two basic accounting courses truly
exceptional.
- Some courses in every college are popular these days because they
are doing something exceptional with technology. These two BYU courses
increased in popularity when a self-made young man became a
multimillionaire and decided to devote his life to being a master
educator in these two accountancy courses at BYU. His name is Norman
Nemrow. He runs these courses full time without salary at BYU and is
neither a tenure-track faculty member nor a noted researcher at BYU. I
think he qualifies, however, as an education researcher even if he does
not publish his findings in academic journals. The video disks are
available to anyone in the world for a relatively small fee that goes to
BYU, but BYU is not doing this for purposes of making great profits. You
can read more about how to get the course disks at the following links:
- The students in these two courses learn the technical aspects of accounting
from variable-speed video disks that were produced by Norman and a team
of video and learning experts. Cameron Earl is a recent graduate of BYU
who is part of the technical team that delivers these two courses on
video. Formal studies of Nemrow's video courses indicate that students
generally prefer to learn from the video relative to live lectures. The
course has computer labs run by teaching assistants who can give live
tutorials to individual students, but most students who have the video
disks for their own computers do not seek out the labs.
Trivia Question
At BYU most students on campus elect to take Norman Nemrow's two basic
accounting courses. In the distant past, what exceptional accounting
professor managed to get his basic accounting courses required at a renowned
university while he was teaching these courses?
Trivia Answer
Bill Paton is one of the all-time great accounting professors in history.
His home campus was the University of Michigan, and for a period of time
virtually all students at his university had to take basic accounting (or at
least so I was told by several of Paton's former doctoral students). Bill
Paton was one of the first to be inducted into the
Accounting Hall of Fame.
As an aside, I might mention
that I favor requiring two basic accounting courses for every
student admitted to a college or university, including colleges
who do not even have business education programs.
But the "required accounting
courses" would not, in my viewpoint, be traditional basic
accounting courses. About two thirds or more of these courses
should be devoted to personal finance, investing, business law,
and tax planning. The remainder of the courses should touch on
accounting basics for keeping score of business firms and
budgeting for every organization in society.
At the moment, the majority of
college graduates do not have a clue about the time value of
money and the basics of finance and accounting that they will
face the rest of their lives.
There are other ways of being "master educators" without being master
teachers in a traditional sense. Three professors of accounting at the
University of Virginia developed and taught a year-long intermediate
accounting case where students virtually had to teach themselves in a manner
that they found painful and frustrating. But for metacognitive reasons
the end result made this year-long active learning task one of the
most meaningful and memorable experiences in their entire education ---
http://faculty.trinity.edu/rjensen/265wp.htm
They often painfully grumbled with such comments as "everything I'm learning
in this course I'm having to learn by myself."
You can read about mastery learning and all its frustrations at
http://faculty.trinity.edu/rjensen/assess.htm#Teaching
"A Philosophy of Teaching," by Rob Jenkins, Chronicle of Higher
Education, September 20, 2011 ---
http://chronicle.com/article/A-Philosophy-of-Teaching/129060/
Most teaching statements are written by people
who—let's be honest—don't really know that much about teaching. Usually the
writers are first-time job seekers with, at best, a year or two as a
graduate assistant or an adjunct under their belts.
Battle-scarred classroom veterans, unless they
happen to be going on the market, rarely write a statement of teaching
philosophy. But maybe they should.
My philosophy of teaching has been forged over more
than 32 years, 26 of those as an instructor. As a student, I attended a
private liberal-arts college and a midsized regional university. I've taught
at a large land-grant university, a small rural community college, a large
metropolitan community college, and a suburban technical college.
Like everyone in the profession, I came to the job
with a number of preconceived notions, based partly on observations of my
own teachers, both good and bad, and partly on my perception of how things
should operate in a perfect world. Most of those notions proved false or
impractical, and the jury is still out on the rest.
In addition, since I also spent 11 years
supervising faculty members, my teaching philosophy has been profoundly
influenced by my experiences with colleagues. I've had the great good
fortune to observe and learn from some of the best teachers in the world.
I've also known a few faculty members whose chief contribution to my
development was to strengthen my resolve never—ever—to do certain
things.
Please note that in sharing my philosophy, I'm not
suggesting that it's the definitive approach or encouraging anyone else to
adopt it. I'm simply sharing what I've come to believe.
College students are adults. I
wrote about that truism in some depth back in August of 2010 ("Welcome
to My Classroom"), but it bears revisiting as one
element of a more comprehensive philosophy.
People tend to rise or fall to the level that is
expected of them. Make it clear that you think students are stupid and, odds
are, they will underperform. Act like you expect them to misbehave, and your
classroom will probably resemble a war zone. But if you tell students
upfront that you consider them to be adults, and then treat them
accordingly, most will attempt to live up to the label. That's certainly
been the case in my classroom over the years.
Treating students like adults means you allow them
the freedoms that adults enjoy—to be late for class, for instance, to miss
it altogether, or to leave early if that's what they need to do. At the same
time, you make it clear that, as adults, they are responsible for all the
material in the course, whether or not they were in class on a particular
day.
That approach has profound implications for every
aspect of classroom management, from discipline to attendance to late
papers. Students like it because they think of themselves as adults and
appreciate being viewed that way. (College students despise few things more
than being treated as though they were still in high school.) And it's good
for professors because it shifts the responsibility for "keeping up" onto
the students, where it belongs.
Teaching is performance art. I
wish I had coined that phrase, or at least knew who did. I just know that it
has become one of my foundational beliefs.
The concept of the teacher as performer, as "the
sage on the stage," has fallen out of favor in recent years. But the fact
is, we are sages and we are on a stage. How we perform—that is, how we
teach—is every bit as important as what we teach.
Moreover, how our students respond to us—and by
extension, to our subject matter—depends largely on the quality of the
performance we give in class, day in and day out. Want to engage your
students, capture their interest, motivate them to do more and be more? Then
pay attention to voice inflection and body language, just as an actor would.
Practice your timing. Play to your audience. Inject some humor. Entertain.
That doesn't mean you have to make yourself the
focal point of the classroom all the time. Class discussions, group work,
and other non-teacher-centric strategies can also be effective. But when the
curtain goes up and it's your time to shine, go out there and knock 'em
dead.
Great teachers may be born, but good
teachers are made. The ability to become a great teacher—one who
inspires students and seems to connect with them effortlessly—is a gift, an
innate talent like musical ability or athletic prowess.
Just like any other gift, it can either be
squandered or put to good use. The very best teachers are those who have the
gift and have worked hard over many years to further develop it—although we
often overlook the hard work because they make being a great teacher look so
easy.
Continued in article
Bob Jensen's threads on metacognitive teaching and learning ---
http://faculty.trinity.edu/rjensen/265wp.htm
Question
What types of students benefit most versus least from video lectures?
"Video Lectures May Slightly Hurt Student Performance," by Sophia Li,
Inside Higher Ed, Chronicle of Higher Education, June 21, 2010 ---
http://chronicle.com/blogPost/Video-Lectures-May-Slightly/24963/
No clear winner emerges in the contest between
video and live instruction, according to the
findings of a recent study led by David N.
Figlio, a professor of education and social policy at Northwestern
University. The study found that students who watched lectures online
instead of attending in-person classes performed slightly worse in the
course over all.
A previous
analysis by the U.S. Department of Education that
examined existing research comparing online and live instruction favored
online learning over purely in-person instruction, according to
the working paper
by Mr. Figlio and his colleagues, which was released
this month by the National Bureau of Economic Research.
But Mr. Figlio's study contradicted those results,
showing that live instruction benefits Hispanic students, male students, and
lower-achieving students in particular.
Colleges and universities that are turning to video
lectures because of their institutions' tight budgets may be doing those
students a disservice, said Mark Rush, a professor of economics at the
University of Florida and one of the working paper's authors.
More research will be necessary, however, before
any definite conclusions can be drawn about the effectiveness of video
lectures, said Lu Yin, a graduate student at the University of Florida who
worked on the project. Future research could study the effectiveness of
watching lectures online for topics other than microeconomics, which was the
subject of the course evaluated in the study, Ms. Yin said.
Jensen Comment
Studies like this just do not extrapolate well into the real world, because so
very much depends upon both how instructors use videos and how students
use videos. My students had to take my live classes, but my Camtasia videos
allowed them to go over technical modules (PQQ = Possible Quiz Questions)
again and again, at their own learning pace, until they got the technical
material down pat ---
http://www.cs.trinity.edu/~rjensen/video/acct5342/
Students who did not use the videos as intended usually paid a price.
However, some outcomes in the above study conform to my priors. For example,
Brigham Young University (BYU) has very successfully replaced live lectures with
variable-speed video lectures in the first two basic accounting courses ---
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#BYUvideo
However, BYU most likely has mostly high-achieving students to
begin with, especially in accounting. It would be interesting to formally study
the use of such variable-speed video in colleges having a higher proportion of
lower-achieving students. My guess is that the variable-speed video lectures
would be less effective with lower-achieving students who are not motivated to
keep replaying videos until they get the technical material down pat. They may
be lower achieving in great measure because they are less motivated learners or
learners who have too many distractions (like supporting children) to have as
much quality study time.
And live lecturing/mentoring is hard to put in a single category because
there are so many types of live lecturing/mentoring ---
http://faculty.trinity.edu/rjensen/assess.htm#Teaching
In conclusion, I think much depends upon the quality of the video versus
lecture, class size, and student motivation. Videos offer the tremendous
advantage of instant replay and being able to adjust to the best learning pace
of the student. Live lectures can, and often do, lead to more human interactive
factors that can be good (if they motivate) and bad (if they distract or instill
dysfunctional fear).
The best video lectures are probably those that are accompanied with instant
messaging with an instructor or tutor that can provide answers or clues to
answers not on the video.
Questions
How well do student evaluations of instructors predict performance in subsequent
advanced courses?
Are popular teachers necessarily the best teachers?
Are students misled by grade inflation?
One of the major points of the study was its look at
the effectiveness of student evaluations. Although the evaluations can
accurately predict the performance of the student in the “contemporaneous”
course — the course in which the professor teaches the student — they are “very
poor” predictors of the performance of a professor’s students in later,
follow-up courses. Because many universities use student evaluations as a factor
in decisions of promotion and tenure, this “draws into question how one should
measure professor quality,” according to the report.
See below
"Evaluating Faculty Quality, Randomly," by James Heggen, Inside Higher
Ed, July 11, 2008 ---
http://www.insidehighered.com/news/2008/07/11/evaluation
The question of how to measure the quality of
college teaching continues to vex campus administrators. Teaching
evaluations, on which many institutions depend for at least part of their
analysis, may be overly influenced by factors such as whether students like
the professors or get good grades. And objective analyses of how well
students learn from certain professors are difficult because, for one, if
based on a standardized test or grades, one could run into problems because
professors “teach to the test.”
A new paper tries to inject some rigorous analysis
into the discussion of how well students learn from their professors and how
effectively student evaluations track how well students learn from
individual instructors.
James West and Scott Carrell co-wrote the study, which was released by
the National Bureau of Economic Research.
“Does Professor
Quality Matter? Evidence from Random Assignment of Students to Professors”
examines students and professors at the U.S. Air Force Academy from fall
1997 to spring 2007 to try to measure the quality of instruction.
The Air Force Academy was selected because its
curricular structure avoids many of the pitfalls of traditional evaluation
methods, according to the report. Because students at the Air Force Academy
are randomly assigned to sections of core courses, there is no threat of the
sort of “self-selection” in which students might choose to study with easier
or tougher professors. “Self-selection,” the report notes, makes it
difficult to measure the impact professors have on student achievement
because “if better students tend to select better professors, then it is
difficult to statistically separate the teacher effects from the selection
effects.”
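The self-selection confound described above can be illustrated with a small simulation. This is only a hedged sketch with invented numbers (a true teacher effect of +2 exam points and a student-ability spread of 5 points, not figures from the study), but it shows why a naive comparison of section averages misleads when students choose professors, and why random assignment fixes it:

```python
import random

random.seed(42)

def estimated_teacher_effect(self_selection):
    """Mean exam-score gap between a 'better' professor's students and an
    'average' professor's students.  The true professor effect is +2 points."""
    better, average = [], []
    for _ in range(20000):
        ability = random.gauss(0, 5)              # student ability, in exam points
        if self_selection:
            with_better = ability > 0             # stronger students pick the better professor
        else:
            with_better = random.random() < 0.5   # random assignment, as at the Air Force Academy
        score = 70 + ability + (2 if with_better else 0)
        (better if with_better else average).append(score)
    return sum(better) / len(better) - sum(average) / len(average)

biased = estimated_teacher_effect(self_selection=True)     # far above 2: ability gap masquerades as teacher quality
unbiased = estimated_teacher_effect(self_selection=False)  # close to the true +2
```

Under self-selection the naive comparison attributes the ability gap between the two rosters to the professor; random assignment makes the two rosters statistically identical, so the remaining gap isolates the professor's contribution.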
Also, professors at the academy use the same
syllabus and give similar exams at about the same time. In the math
department, grading is done collectively by professors, where each professor
grades certain questions for all students in the course, which cuts down on
the subjectivity of grading, according to the report. The students are
required to take a common set of “follow-on” courses as well, in which they
are also randomly assigned to professors.
The authors acknowledge that situating the study at
the Air Force Academy may also raise questions of the “generalizability” of
the study, given the institution’s unusual student body. “Despite the
military setting, much about USAFA is comparable to broader academia,” the
report asserts. It offers degrees in fields roughly similar to those of a
liberal arts college, and because students are drawn from every
Congressional district, they are geographically representative, the report
says.
Carrell, an assistant professor of economics at the
University of California at Davis, attended the academy as an undergraduate
and the University of Florida as a grad student, and has taught at Dartmouth
as well as the Air Force Academy and Davis. “All students learn the same,”
he said.
For math and science courses, students taking
courses from professors with a higher “academic rank, teaching experience,
and terminal degree status” tended to perform worse in the “contemporaneous”
course but better in the “follow-on” courses, according to the report. This
is consistent, the report asserts, with recent findings that students taught
by “less academically qualified instructors” may become interested in
pursuing further study in particular academic areas because they earn good
grades in the initial courses, but then go on to perform poorly in later
courses that depend on the knowledge gained from the initial courses.
In humanities, the report found no such link.
Carrell had a few possible explanations for why no
such link existed in humanities courses. One is because professors have more
“latitude” in how they grade, especially with essays. Another reason could
be that later courses in humanities don’t build on earlier classes like
science and math do.
One of the major points of the study was its look
at the effectiveness of student evaluations. Although the evaluations can
accurately predict the performance of the student in the “contemporaneous”
course — the course in which the professor teaches the student — they are
“very poor” predictors of the performance of a professor’s students in
later, follow-up courses. Because many universities use student evaluations
as a factor in decisions of promotion and tenure, this “draws into question
how one should measure professor quality,” according to the report.
“It appears students reward getting higher grades,”
Carrell said
Partly because he was
fed up with childish comments on Web sites where students rate their
professors, a business-school professor at Temple University has created an
online forum for students who want to sound off. So as not to mislead
students, the site’s title suggests its intent: “Thank You Professor.”
“There are so many vehicles for students to express
their opinion,” says the site’s creator,
Samuel
D. Hodge Jr., chairman of the business school’s
legal-studies department. “But there’s nothing really at the school where
the professor can get a letter directly from the student.”
When the site went live on May 1, Mr. Hodge says,
he expected about a dozen comments in the first week. Instead, more than 200
flooded in. He converts each note into a letter to the faculty member being
praised, then makes sure the business school’s dean gets a copy.
Mr. Hodge moderates the comments, but so far there
haven’t been any negative posts on
the site,
he says.
For example, the four “thank you notes” left on the
site so far for
Rob B.
Drennan Jr., an associate professor of risk,
insurance, and health-care management, have been uniformly laudatory (three
were signed, and one was anonymous). “I truly enjoyed his class,” wrote one
student, Tom Coia. “Difficult and challenging, but isn’t that what we want
from school?” Contrast that to an anonymous comment concerning Mr. Drennan
that a student left last spring on
RateMyProfessors.com: “BOOOOO!!!!!”
Mr. Hodge, incidentally,
has appeared on an MTV
Web site of faculty members who “strike back” against comments on
RateMyProfessors.com. He says Ohio State University is the only other
institution he knows of that gives students a way to thank their professors
on the Web.
Temple may extend the site to the whole university,
he says: “It’s such positive reinforcement."
Also see
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor
"On Great Teachers and the Remarkable Life: A Deliberate Practice Case
Study," Study Hacks, February 10, 2010 ---
Click Here
Predicting Greatness
The impact of teachers is profound. If you rank the
world’s countries by their students’ academic performance, the US is
somewhere in the middle. In a
2009 New Yorker article,
Malcolm Gladwell notes that replacing “the bottom six percent to ten percent
of public-school teachers with teachers of average quality” could be enough
to close the gap between our current position and the top ranked countries.
“[Y]our child is actually better off in a ‘bad’
school with an excellent teacher than in an excellent school with a bad
teacher,” Gladwell concludes.
But there’s a problem: “No one knows what a person
with the potential to be a great teacher looks like.”
Or at least, so Gladwell claims.
Teach for America, a non-profit that
recruits outstanding college graduates to teach in low-income school
districts, disagrees. This organization is fanatical about data. For the
past 20 years, they’ve gathered massive amounts of statistics on their
teachers in an attempt to figure out why some succeed in the classroom and
some fail. They then work backwards from these results to identify what
traits best predict a potential recruit’s success.
As Amanda Ripley reports in a
comprehensive look inside the Teach For America process,
published in the Atlantic Monthly, the
results of this outcome-based approach to hiring are “humbling.”
“I came into this with a bunch of theories,” the
former head of admissions at Teach for America told Ripley. “I was
proven wrong at least as many times as I was validated.”
When Teach for America first started 20
years ago, applicants were subjectively scored by interviewers on 12 general
traits, like “communication” ability. (A sample interview question: “What is
wind?”) By contrast, if you were one of the 35,000 students who applied in
2009 (a pool that included 11% of Ivy League seniors), 30 data points,
gathered from a combination of questionnaires, demonstrations, and
interviews, were fed into a detailed quantitative model that returned a
hiring recommendation.
This data-driven approach seems to work. As Ripley
reports, in 2007, 24% of Teach for America teachers advanced their
students at least one and a half grade levels. Two years later, as
the organization’s models continued to evolve, that number had nearly
doubled to 44%.
I’m fascinated by Teach For America for a
simple reason: the traits they
discovered at the core of great teaching are unmistakably a variant of
deliberate practice — not the pure,
coach-driven practice of professional athletes and chess grandmasters, but a
hearty, adaptable strain that’s applicable to almost any field.
Put another way, these outstanding teachers may
have unwittingly
cracked the code for generating a remarkable life…
Inside the Classroom of an Outstanding
Teacher
In her Atlantic piece, Ripley recounts an
afternoon spent in the math classroom of William Taylor, a teacher in
southeast Washington D.C. who ranks in the top 5% of all math teachers in
the district.
When Taylor enters the classroom, his students fall
into a strictly choreographed interaction.
“Good morning,” he calls. “Good morning!” the
students answer.
The period begins with Mental Math. Taylor calls
out problems which the students answer in their heads. They then write their
solutions on orange index cards which they all hold up at the same time.
“If some kids get it wrong, they have not
embarrassed themselves,” Ripley notes. But Taylor now knows who needs more
attention.
After Mental Math, Taylor teaches the class a new
method for long division. The students try the strategy in groups of four,
each led by a “team leader,” a role that rotates regularly. (Taylor found
that students were more receptive to help from their fellow students.) After
having the students try the method on their own, Taylor begins calling them
up to the board, selecting names at random to ensure no one is overlooked.
“I try, but I can’t find a child who isn’t talking
about math,” Ripley recalls of her afternoon in the classroom.
The class continues with a spirited game of
Multiplication Bingo. Before the students leave, they have to answer a final
problem on a slip of paper that they hand to Taylor at the door — another
method for him to assess who is still struggling with the day’s material.
What Makes Great Teachers Great?
“Strong teachers insist that effective teaching is
neither mysterious nor magical,” says Ripley. “It is neither a function of
dynamic personality nor dramatic performance.”
Instead, Teach for America has identified
the following traits as the most important for high-performing teachers such
as Taylor:
- They set big goals for their students
and are perpetually looking for ways to improve their effectiveness.
(In the Atlantic article, Teach for America’s in-house
professor, Steve Farr, noted that when he sets up visits with superstar
teachers they often say something like: “You’re welcome to come, but I
have to warn you — I am in the middle of just blowing up my classroom
structure…because I think it’s not working as well as it could.” )
- They’re obsessed with focusing every
minute of classroom time on student learning.
- They plan exhaustively and
purposefully, “working backward from the desired outcome.”
- They work “relentlessly”…”refusing to
surrender.”
- They keep students and their families
involved in the process.
An expert quoted in the article summarized the
findings: “At the end of the day…it’s the mind-set that teachers
need — a kind of relentless approach to the problem.”
The first four traits above should sound familiar.
Setting big goals, working backwards from results to process, perpetually
trying to improve, relentless focus — these sound a lot like
the traits of deliberate practice.
Indeed, when selecting teachers for their program,
Teach for America’s complex recruiting model identifies graduates who
show evidence of having mastered this skill. Two effective predictors of a
recruit’s classroom success, for example, are improving a GPA from low to
high and demonstrating meaningful “leadership achievement.” That is,
improving a 2.0 to a 4.0 is more important than maintaining a 4.0, and
doubling a club’s membership is more important than simply being elected
president. Teach for America wants signs that you can take a
difficult goal and then find a way to make it happen.
A Different Kind of Deliberate Practice
A recent article in the Wall Street Journal
estimated that it takes around 500,000 hours of deliberate practice for an
NFL team to make it through a season. To put that in perspective, that’s
about 32 hours of hard work for each foot the ball moves down the
field. This effort, of course, is carefully controlled and coached — for
example, the article quotes the Colts’ defensive end, Keyunta Dawson,
talking about the intense training needed to make split-second decisions
based on subtle positioning of the head or foot of the opposing lineman.
“I thought college was a grind,” said Dawson. “But
this is a job.”
When we think about deliberate practice, we tend to
think about examples like Dawson, or chess grandmasters, or piano virtuosos
being painstakingly coached through a difficult, but well-established, path
to mastery.
The examples of this process playing out in
classrooms, however, have a different feel. William Taylor doesn’t have a
coach or decades of well-established training methodology to draw on.
His approach is more free-form. He started
with a clear goal — when he presented a concept, he wanted every
student to understand it — and then became obsessed with its achievement.
His Mental Math exercise, his random selection of students to do problems at
the board, the “exit slips” he collected at the end of the period — these
activities evolved from a drive to constantly assess his classes’
comprehension.
Over time, the extraneous was excised from his
classroom schedule (he developed hand signals for the students to use to
indicate a need for the bathroom — a way to eliminate the wasted time and
distraction of calling on them). He exhaustively plans his lessons, and then
ruthlessly culls or modifies any piece that isn’t effective.
“I found that the kids were not hard…[i]t was
explaining the information to them that was hard,” Taylor recalls about his
first year. He kept working until he cracked that hard puzzle.
Freestyle Deliberate Practice
Here are the main components of Taylor’s approach
to deliberate practice:
- Build an obsession with a
clear goal.
- Work backwards from the goal
to plan your attack.
- Expend hard focus toward this
goal every day.
- Ruthlessly evaluate and modify
your approach to remove what doesn’t work and improve what does.
Let’s call this approach freestyle
deliberate practice to differentiate it from the more
structured strain written about in the research literature. Here’s my
argument: for most fields, freestyle deliberate practice is the key
to building a rare and valuable skill.
Continued in article
Bob Jensen's threads on tools and tricks of the trade are at
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm
Bob Jensen's threads on critical thinking, including "beyond critical
thinking" --
http://faculty.trinity.edu/rjensen/265wp.htm
The Cambridge Handbook of Thinking and Reasoning
---
Click Here
The Miniature Guide To Critical Thinking Concepts & Tools ---
Click Here
Master Teachers Who Deliver Exceptional Courses
But Have Little Contact With Individual Students
Before reading this section, you should be familiar with the document at
http://faculty.trinity.edu/rjensen/assess.htm#Teaching
Master teachers can also be outstanding researchers, although research is
certainly not a requisite to being a master teacher. Some, not many, master
teachers also win awards for leading empirical and analytical research. I've
already mentioned Bill Beaver and Mary Barth at Stanford University. One
common characteristic is exceptional preparation for each class coupled with
life experiences to draw upon when fielding student questions. These life
experiences often come from the real world of business apart from the more
narrow worlds of mathematical modeling where these professors are also
renowned researchers.
Frequently master teachers teach via cases and are also known as
exceptional case-method researchers and writers of cases. The Harvard
Business School every year has some leading professors who are widely known
as master teachers and master researchers. Michael Porter may become one of
Harvard's all-time legends. Some of the current leading master teachers at
Harvard and elsewhere who consistently stand head and shoulders above their
colleagues are listed at
http://rakeshkhurana.typepad.com/rakesh_khuranas_weblog/2005/12/index.html
Some of the all-time great case teachers were not noted researchers or
gifted case writers. Master case teachers are generally gifted
actors/actresses with carefully prepared scripts and even case
choreographies in terms of how and where to stand in front of and among the
class. The scripts are highly adaptable to most any conceivable question or
answer given by a student at any point in the case analysis.
Most master case teachers get psyched up for each class. One of Harvard's
all-time great case teachers,
C. Roland (Chris) Christensen, admitted that even after years of teaching he
still threw up in the men's room before entering the classroom.
Some of these top case-method schools, like the Harvard Business School
and Darden (University of Virginia), have very large classes. Master teachers
in those instances cannot become really close to each and every student
they educate and inspire.
Some widely noted case researchers and writers are not especially good in
the classroom. In fact, I've known several who are considered poor teachers
whom students avoided whenever possible even though their cases are popular
worldwide.
Open-Door Master Teachers Who Have Exceptional One-On-One Relations
With Students
Not all master teachers are particularly outstanding in the classroom.
Two women colleagues in my lifetime stand out as open-door master teachers
who were well prepared in class and good teachers but not necessarily
exceptional classroom performers. What made them master teachers was
exceptional one-on-one relations with students outside the classroom. These
master teachers taught superbly in their offices and had virtually open-door
policies each and every day. Both Alice Nichols at Florida State
University and Petrea Sandlin at Trinity University got to know each student
and even some students' parents very closely. Many open-door master
teachers' former students rank them at the very top of all the teachers they
ever had in college. Many students elected to major in accounting because
these two women became such important parts of their lives in college.
But not all these open-door master teachers are promoted and well-paid by
their universities. They often have neither the time nor aptitude for
research and publishing in top academic journals. Sometimes the university
bends over backwards to grant them tenure but then locks them in at
low-paying associate ranks with lots of back patting and departmental or
campus-wide teaching awards. Some open-door master teachers never attain the
rank and prestige of full professor because they did not do enough research
and writing to pass the promotion hurdles. Most open-door master teachers find their
rewards in relations with their students rather than relations with their
colleges.
Sometimes master teachers teach content extremely well without
necessarily being noted for the extent of coverage. On occasion they may
skip very lightly over some of the most difficult parts of the textbooks
such as the parts dealing with FAS 133, IAS 39, and FIN 46. The most
difficult topics can frustrate students with both the course and the
instructor who nevertheless makes them learn those topics, especially when
the textbook coverage is superficial and outside technical material has to
be brought into the course. Less popular teachers are sometimes despised
taskmasters.
Your question initially was to comment on the relation between teaching and
research. In most instances research at some point in time led to virtually
everything we teach. In the long-run research thus becomes the foundation of
teaching. In the case of accounting education this research is based heavily on
normative and case method research. Many, probably most,
accountics researchers are not outstanding teachers of undergraduate
accounting unless they truly take the time for both preparation and student
interactions.
New education technologies may especially help these researchers teach better.
For example, adding video such as the BYU variable speed video described above
may replace bad lecturing in live classes with great video learning modules.
Similarly, master teachers and master educators are sometimes reputed researchers, but this is
probably the exception rather than the rule. Researchers have trouble finding
the time for great class preparation and open-door access.
And lastly,
accountics researchers' research in accounting has not been especially
noteworthy, apart from case-method research, in providing great teaching
material for our undergraduate and masters-level courses. If it were
noteworthy it would at least have been replicated ---
http://faculty.trinity.edu/rjensen/theory01.htm#Replication
If it were noteworthy for textbooks and teaching, practitioners would at
least be interested in some of it as well ---
http://faculty.trinity.edu/rjensen/theory01.htm#AcademicsVersusProfession
"‘Too Good’ for Tenure?" by Alison Wunderland (pseudonym), Inside Higher
Ed, October 26, 2007 ---
http://www.insidehighered.com/views/2007/10/26/wunderland
But what most small colleges won’t tell you — not
even in the fine print — is that teaching and students often really don’t
come first. And for the professors, they can’t. Once upon a time teaching
colleges taught and research institutions researched. But these days, with
the market for students competitive, and teaching schools scrambling for
recognition, they have shifted their priorities. Now they market what is
measurable — not good teaching, but big names and publications. They look to
hire new faculty from top research universities who will embellish the
faculty roster and bring attention to the school by publishing. And they can
do this, because even job candidates who don’t really want to be at places
like Rural College (although it is ranked quite well) are grateful to get a
tenure-track position.
And here is where the problem is compounded. Small
schools want books instead of teaching; and many new faculty — even the
mediocre scholars — want to publish instead of teach. In the new small
college, both win. Everyone looks the other way while courses are neglected
for the sake of publications. What few devoted teachers will admit — because
to do so would be impolitic — is that it is impossible to teach a 4-4 or
even a 3-3 load effectively and publish a book pre-tenure without working
“too hard.” What’s more, when you suggest that a small teaching college
should prioritize teaching over publishing, what your colleagues hear you
say is, “I am not good enough to publish.”
Sadly, many of the students also think they win in
this scenario. They get good grades with little work. Once a culture like
this is established, a new faculty member who is serious about teaching
rocks the boat. And if she still somehow manages to excel in all the other
required areas, she might be sunk. Unfortunately for the small schools, the
best solution for her might be to jump ship.
"Teaching Professors to Be More Effective Teachers," Elizabeth Redden,
Inside Higher Ed, October 31, 2007 ---
http://www.insidehighered.com/news/2007/10/31/ballstate
David W. Concepción, an associate professor of
philosophy, came to the first workshop series in 2003 wondering why
“students in courses for some number of years said, ‘I get nothing out of
the reading’” (specifically the primary philosophy texts). Discovering
through student focus groups that what they meant was that they couldn’t
ascertain the main points, Concepción realized that he needed to explain the
dialogical nature of philosophy texts to students in his 40-person
introductory philosophy course.
Whereas high school texts tend to be linear and
students read them with the objective of highlighting facts paragraph by
paragraph that they could be tested on, “Primary philosophical texts are
dialogical. Which is to say an author will present an idea, present a
criticism of that idea, rebut the criticism to support the idea, maybe
consider a rejoinder to the rebuttal of the criticism, and then show why the
rejoinder doesn’t work and then get on to the second point,” Concepción
says.
“If you are reading philosophy and you’re assuming
it’s linear and you’re looking for facts, you’re going to be horribly,
horribly frustrated.”
Out of the workshop, Concepción designed an initial
pedagogical plan, which he ran by fellow workshop participants, fellow
philosophy faculty, junior and senior philosophy majors, and freshmen
philosophy students for feedback. He developed a “how-to” document for
reading philosophy texts (included in a December 2004 article he published
in Teaching Philosophy, “Reading Philosophy with Background Knowledge and
Metacognition,” which won the American Association of Philosophy Teachers’
Mark Lenssen Prize for scholarship on the instruction of philosophy).
Based on the constructivist theory of learning
suggesting that students make sense of new information by joining it with
information they already have, his guidelines suggest that students begin
with a quick pre-read, in which they underline words they don’t know but
don’t stop reading until they reach the end. They then would follow up with
a more careful read in which they look up definitions, write notes
summarizing an author’s argument into their own words on a separate piece of
paper, and make notations in the margins such that if they were to return to
the reading one week later they could figure out in 15 seconds what the text
says (a process Concepción calls “flagging”).
Concepción also designed a series of assignments in
which his introductory students are trained in the method of reading
philosophy texts. They are asked to summarize and evaluate a paragraph-long
argument before and after learning the guidelines (and then write a report
about their different approaches to the exercise before and after getting
the “how-to” document on reading philosophy), turn in a photocopy of an
article with their notations, and summarize that same article in writing.
They participate in a class discussion in which they present the top five
most important things about reading philosophy and face short-answer
questions on the midterm about reading strategies (after that, Concepción
says, students are expected to apply the knowledge they’ve learned on their
own, without further direct evaluation).
The extra reading instruction has proven most
beneficial for the weakest students, Concepción says — suggesting that the
high-performing students generally already have the advanced reading skills
that lower performers do not.
“What happened in terms of grade distribution in my
classes is that the bottom of the curve pushed up. So the number of Fs went
down to zero one semester, the Ds went down and the Cs stayed about the same
in the sense that some of the former C performers got themselves in the B
range and the Fs and the Ds got themselves in the C range. There was no
difference in the A range, and not much difference in the B range.”
Meanwhile, in his weekly, 90-person lecture class
on World Mythology, William Magrath, a full professor of classics, also saw
significant drops in the number of Fs after developing targeted group work
to attack a pressing problem: About a quarter of freshmen had been failing.
“I had been keeping very close records on student
performance over the semester for the previous five or six years and noticed
that there was a pattern wherein a lot of the freshmen were having real
difficulty with the course. But it wasn’t so much that they weren’t
performing on the instruments that they were given but rather that they
weren’t taking the quizzes or weren’t taking the tests or weren’t getting
the assignments in,” Magrath says.
Discovering that he could predict final grades
based on student performance in just the first four weeks of class with
remarkable accuracy, he divided the freshmen into groups based on their
projected grades: the A/Bs, B/Cs and Ds/Fs (No – he didn’t call them by
those names, but instead gave the groups more innocuous titles like “The
Panthers.”)
Meeting with each set of students once every three
weeks for one hour before class, he gave the A/Bs a series of supplemental
assignments designed to challenge them. For instance, he would give them a
myth on a particular theme and ask them to find three other myths connected
to that theme for a group discussion. Meanwhile, the Ds/Fs took a more
structured, step-by-step approach, completing readings together and
discussing basic questions like, “How do you approach a story, what do you
look for when you face a story, how would you apply this theory to a story?”
Meanwhile, Magrath says, the B/C students didn’t
complete supplemental reading, but were instead expected to post questions
about the readings or lectures that he would answer on the electronic class
bulletin board – with the idea that they would remain engaged and involved
in class.
In the end, Magrath found the smallest difference
for B/C students. But the overall average of students climbed from 1.9 in
1999-2002, before the group work was put in place, to 2.4 in 2003-5. Of all
the Fs he gave, the percentage given to freshmen (as opposed to
upperclassmen in the class, who did not participate in the group work) fell
from 63 to 11 percent.
When, in 2006, Magrath stopped conducting the group
work in order to see what the effect might be, performance returned to
earlier levels.
“The dynamic of this class is a large lecture class
with the lights dimmed at night on Thursdays once a week. The kids feel
anonymous almost right away. That anonymity gets broken by virtue of being
with me,” Magrath says. He adds that while he has also replicated the group
work format in the spring semester, the results weren’t as dramatic —
suggesting, he says, that freshman fall is the critical time to get students
on track.
“If what [first-semester freshmen] are experiencing
in the classroom isn’t accommodating for them, they don’t know what to do.
They genuinely don’t know what to do,” he says.
As for steps forward, Ranieri, the leader of the
initiative, says that the Lumina grant – which included funds for faculty
stipends of $2,400 the first year and $2,000 in subsequent years (faculty
who participated in the first two years continued to participate in
workshops and receive funding through the end of the three-year cycle) — has
been exhausted. However, he hopes to expand a report he’s writing — which
tracks retention and GPA data for students who enrolled in the “Lumina”
courses as freshmen throughout their college careers — for publication.
So far, Ranieri says, the various professors
involved have given 13 national or international presentations and produced
four peer-reviewed publications.
“One of the biggest problems you have in higher
education,” he says, “is allowing faculty members to be rewarded for this
kind of work.”
October 30, 2007 reply from Linda A Kidwell
[lkidwell@UWYO.EDU]
There was an article in the Smith College Alumnae
Magazine several years ago about one of my favorite professors at Smith,
Randy Bartlett in economics. My second semester of senior year, I was done
with all my required courses and swore I would not take another 8:00 class,
but one of my friends told me to give his 8am Urban Economics class a try.
He opened class that first day by reading Carl Sandburg's poem Chicago, and
I was hooked -- back into an unnecessary 8 o'clock class by choice! And he
was indeed a wonderful teacher. He read that poem again after a semester of
urban econ, and it took on a whole new meaning.
Although I was unaware of his research activities
at the time, the article I mentioned contained this wonderful quote I have
kept on my wall since then:
"I carry out the research and publish because it
keeps my mind lively. I can't ask my students to take on hard work without
my doing the same."
When I wonder about the significance of my
contributions to the field, I read that quote.
For those who don't know the poem, here it is:
CHICAGO

Hog Butcher for the World,
Tool Maker, Stacker of Wheat,
Player with Railroads and the Nation’s Freight Handler;
Stormy, husky, brawling,
City of the Big Shoulders:

They tell me you are wicked and I believe them, for I have seen your
painted women under the gas lamps luring the farm boys.
And they tell me you are crooked and I answer: Yes, it is true I have seen
the gunman kill and go free to kill again.
And they tell me you are brutal and my reply is: On the faces of women and
children I have seen the marks of wanton hunger.
And having answered so I turn once more to those who sneer at this my city,
and I give them back the sneer and say to them:
Come and show me another city with lifted head singing so proud to be alive
and coarse and strong and cunning.
Flinging magnetic curses amid the toil of piling job on job, here is a tall
bold slugger set vivid against the little soft cities;
Fierce as a dog with tongue lapping for action, cunning as a savage pitted
against the wilderness,
Bareheaded,
Shoveling,
Wrecking,
Planning,
Building, breaking, rebuilding,
Under the smoke, dust all over his mouth, laughing with white teeth,
Under the terrible burden of destiny laughing as a young man laughs,
Laughing even as an ignorant fighter laughs who has never lost a battle,
Bragging and laughing that under his wrist is the pulse, and under his ribs
the heart of the people,
Laughing!
Laughing the stormy, husky, brawling laughter of Youth, half-naked,
sweating, proud to be Hog Butcher, Tool Maker, Stacker of Wheat, Player
with Railroads and Freight Handler to the Nation.

Carl Sandburg, 1916
Linda Kidwell, University of Wyoming
October 30, 2007 reply from Patricia Doherty
[pdoherty@BU.EDU]
You know, Linda, somehow your post brought to my
mind something from my own undergraduate days at Duquesne University. I was
a Liberal Arts student, and had to take, among other things, 4 semesters of
history. I came into it dreading it - I'd hated history in high school - all
memorization and outlining of chapters. The first college semester was no
improvement - an auditorium lecture with hundreds of students, a professor
lecturing for 50 minutes, and a TA taking attendance. Then came the second
semester. I looked for, and found, a smaller class. The professor (whose
name escapes me right now) was a "church historian," researching history
from the viewpoint of world religions. He began the first class by reading
an excerpt from Will Cuppy's "The Decline and Fall of Practically
Everybody." Had us rolling in the aisles. He kept at it the whole term,
interspersing history with Cuppy readings and anecdotes from actual history.
I loved that class.
And Will Cuppy is on my shelf to this day. And that
professor awakened in me a love of history. I read history, historical
novels, watch history films (fiction and non) to this day. All because one
professor thought history was a living thing, not a dead timeline, and
managed to convey that to a bunch of jaded sophomores.
p
"Should We Teach Broadly or Deeply?" by the Unknown Professor who
maintains the Financial Rounds blog, February 13, 2008 ---
http://financialrounds.blogspot.com/
I just came across a talk by Robert Frank, author
of "The
Economic Naturalist." He talked about his book,
and about the problem of why so many students don't retain key concepts from
their classes. For example, on the first day of my security analysis class I
typically ask students the question "What should determine the value of a
security?" The answer, of course, is "The amount, timing, and riskiness of
the cash flows from owning the security." Fewer than 1/4 of the class knows
the answer without prompting.
Frank's explanation for why students retain so little is that we simply try
to pack too much into our classes. This makes our syllabus seem impressive, but
shortchanges the students. As an example, we might cover 14 chapters (and
15-20 concepts) in a 14 week semester, rather than covering half that many
and really drilling the concepts in.
"But Unknown Professor," you say, "We HAVE to cover A, B, C, D, E, F, G, H,
I, J, K, L, and M in the introductory finance class or we're shortchanging
the students." The problem with this approach is that a few months after the
class is done, they don't remember anything about topics A-M except that
they covered them sometime in class. And they really don't even have all
that great a grasp of critical concepts such as the Time Value of Money.
In contrast, if you covered half as many topics, you could spend 3-4 weeks
on Time Value rather than the usual week or two. This way, you could make
them do about a hundred or so problems, and they'd really have it locked
down.
Continued in article
Jensen Comment
The problem is that many courses are prerequisites to other courses or
certification examinations. Course content may also be specified by a curriculum
committee of some type. Add to this the problem of explaining to students
why your course covers only 10%-20% of a textbook that many other colleges
cover much more comprehensively. My point is that course coverage is often
not the sole domain of the course instructor.
We must also address the purpose of a course. Is it to inspire a student to
learn more, or is it mastery of content? When we teach broadly we are often
trying to inspire students to pursue topics later in life. When we teach
deeply we've opted for mastery of content. Many educators think inspiration
trumps mastery. In fact, mastery learning often leads to burnout of both
instructors and students.
Differences between "popular teacher" versus "master teacher" versus "mastery
learning" versus "master educator" ---
http://faculty.trinity.edu/rjensen/assess.htm#Teaching
Bob Jensen's threads on Cognitive Processes and Artificial Intelligence
are at
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#CognitiveProcesses
Question
What is mastery learning?
April 24, 2006 message from Lim Teoh
[bsx302@COVENTRY.AC.UK]
I am a Malaysian but currently teaching in the UK.
Please forgive me if I failed to express myself clearly in English.
I just joined the discussion list months ago and
found a lot of useful information for both my research and teaching career
development. My sincere thanks to AECM.
As I plan to start my PhD study by end of this
year, I would like to ask for your help to get some references to my
research topic. I am interested in mastery learning theory and programmed
instruction; I'll research into the application of these theories to
accounting education. I aim to explore how the accounting knowledge can be
disseminated or transferred more effectively to a large group of students.
Are there any useful databases or websites that
could help me to start with this PhD research? Is this research topic
outdated or inappropriate for me to proceed further?
Looking forward to receiving your advice and
guidance.
Kind regards,
Lim
Coventry University United Kingdom
April 24, 2006 reply from Bob Jensen
Hi Lim,
Here are some possible links that might help:
Differences between "popular teacher" versus "master teacher" versus
"mastery learning" versus "master educator" ---
http://faculty.trinity.edu/rjensen/assess.htm#Teaching
Also see “Mastery Learning” at
http://www.humboldt.edu/~tha1/mastery.html
This provides references to the classical literature on learning theory by
Benjamin Bloom.
One of the most extensive accounting education experiments with mastery
learning took place under an Accounting Education Change Commission Grant at
Kansas State University. I don't think the experiment was an overwhelming
success and, to my knowledge, has not been implemented in other accounting
programs:
http://aaahq.org/facdev/aecc.htm
http://aaahq.org/AECC/changegrant/cover.htm
To find a comprehensive list of references, feed in “Benjamin Bloom” and
“Learning” terms into the following links:
Google Scholar ---
http://scholar.google.com/advanced_scholar_search?hl=en&lr=
Windows Live Academic ---
http://academic.live.com/
Google Advanced Search ---
http://www.google.com/advanced_search?hl=en
You might also be interested in metacognitive learning ---
http://faculty.trinity.edu/rjensen/265wp.htm
You can also read about asynchronous learning at
http://faculty.trinity.edu/rjensen/255wp.htm
October 14, 2005 message from David Albrecht
[albrecht@PROFALBRECHT.COM]
I've encountered something interesting. Two
Ph.D. students in Communication contacted me about visiting one or more
classes in one of the courses I teach. Their assignment is to study
what a master teacher does. Apparently a list of "master teachers" is
kept at BGSU, and my name is on it. Well, they visited a class today,
again, and then they interviewed me about teaching.
I think this is a great idea in general.
Although I probably would not have adequately appreciated it when I was
a "wet behind the ears" Ph.D. student, I think it is a good way to get
future professors to think about the craft of teaching. Would something
like this be valuable in an accounting Ph.D. program?
BTW, I have no idea how my name got on that
list. I don't recall bribing anyone.
David Albrecht
October 14, 2005 reply from Bob Jensen
Hi David,
Congratulations on being singled out on your
campus as a "master teacher."
Your message prompted me to think about the
difference between "popular teacher" versus "master teacher" versus
"mastery learning" versus "master educator."
Master teacher and master educator are not well-defined terms.
However, "mastery learning" has been well defined since the early works of
Benjamin Bloom. It generally entails mastery of learning objectives based on
outside (curriculum) standards that often apply to multiple instructors.
Mastery learning can be accomplished with the aid of master teachers or
with no "live" teachers at all. In the ideal case, students must do a
lot of intense learning on their own. See
http://www.humboldt.edu/~tha1/mastery.html
One of the most interesting mastery learning graduate accounting
programs is Western Canada's Chartered Accountancy (Graduate) School of
Business (CASB) ---
http://www.casb.com/
My friend Don Carter gave me an opportunity to consult in a review of
this program several years ago. Courses are heavily "taught" via
distance education and mastery learning objectives. It's one of the
toughest graduate accounting programs that I've ever witnessed. Students
truly master course objectives by a variety of processes.
Master teaching can be a bundle of many things. One usually thinks of
an outstanding lecturer who also is an inspirational speaker. However, a
master teacher may also be lousy at giving lectures but have fantastic
one-on-one teaching dedication and talents. Three such teachers come to
my mind in my nearly four decades of being a faculty member in four
different universities.
The gray zone is where the teacher is a lousy lecturer and has poor
oral communication skills in any environment. Can that teacher be a
master teacher simply because he/she developed exceptional learning
materials and possibly learning aids such as clever software/games,
brilliant course content, and/or unbending standards that lead virtually
the entire class to succeed at mastery learning of tough content?
I guess my question is whether a master teacher is defined in terms
of mastery (or exceptional) learning versus exceptional motivation for
lifelong learning and a dedicated career choice?
Anecdotally, I have been truly inspired by good lecturers in courses
where I didn't learn a whole lot but wanted afterwards to learn much
more. I have also worked my butt off in some hard courses where I did
most of the learning on my own because the teacher didn't teach well but
made sure I learned the material. I guess both kinds of teachers are
important along the way. I learned to appreciate the latter kind of
teacher more after I graduated.
The really hard thing to separate in practice is popular teaching
versus master teaching. I like to think of master teaching as leading to
mastery learning, but this is not a rigorous definition of master
teaching. If half the class flunks, then the teacher cannot be
considered a master teacher in a mastery learning environment.
There is one possible definition of a "popular teacher." A popular
teacher might be defined as one who gets perfect teaching evaluations
from students independently of grading outcomes, including perfect
teaching evaluations from virtually anybody she flunks. Petrea Sandlin
at Trinity University has that skill. Implicitly this means that such a
teacher has convinced students that they are entirely responsible for
their own successes or failures. But if half the class flunks without
blaming the teacher, can the teacher be considered a popular teacher but
not a master teacher? (By the way, Petrea's passing rates are much
higher and I consider her to be a master teacher as well as a popular
teacher. This was duly recognized when she won an all-university
teaching award of $5,000.)
Perhaps what we really need is a more precise distinction between
"master teacher" versus "master educator." A master teacher brings
students into the profession, and a master educator makes sure they
ultimately qualify to enter into and remain in the profession.
In any case, congratulations David! I hope you
are a master teacher and a master educator.
Bob Jensen
October 15, 2005 reply from Mooney, Kate
[kkmooney@STCLOUDSTATE.EDU]
I'm detecting a subtle thread here--a master
teacher can get everyone to pass. Can't agree with that, especially at a
public, state school that isn't the flagship institution in the state.
Sometimes all the teaching and studying in the world won't be successful
because the brainpower isn't there. In that situation, I believe the
master teacher constructs the course and teaches in such a way that the
students who can be successful in the major/profession get through the
filter. Those folks in the filter course need to be master teachers AND
courageous. (Note: I don't teach the filter course but wholeheartedly
support the guy who does.)
Our pre-business advising group often wishes
for a sorting hat like that in the Harry Potter books to eliminate the
pain of failing in the first intermediate accounting course.
Just another lurker muddying an otherwise crisp
discussion,
K
October 15, 2005 reply from Bob Jensen
Hi Kate,
You make a very good point. Perhaps we can work toward a definition
of master teacher as one who draws out every bit of brain power that is
there even though there may not be enough brain power and whatever else
it takes for mastery learning or even what it takes to pass a course by
the teacher's own standards.
I might note that most college courses are not mastery learning
courses. If the instructor both teaches the course and sets the
standards, the standards may vary from instructor to instructor even
when they teach virtually the same course. Some instructors set lower
standards in an effort to instill confidence and keep troubled students
from giving up entirely. Other instructors set high standards because of
their allegiance to external criteria. For example, some might view it as
unethical to hold out promise that all students can become engineers,
CPAs, medical doctors, or computer scientists. Maximal effort on the
part of some students just will not cut it later on.
Mastery learning by definition implies some type of external
standards imposed upon all instructors teaching virtually the same
course. Professional certification examinations (e.g., medical
examinations, bar exams, and CPA examinations) often dictate many of the
mastery learning standards in professional studies.
Many college professors despise mastery learning because they feel it
converges on training (albeit tough training) as opposed to education
(where learning how to learn is deemed paramount).
I'm still troubled by the definition of a master teacher. I don't
think there is a single definition, although any definition must weigh
heavily upon instilling a motivation to learn. You are correct, Kate, in
pointing out that motivation alone is not enough for some students.
There probably is no threshold level (such as 60%) of passage rate in
the definition of a master teacher.
I'm less troubled by a definition of a master educator. I don't think
there is a single definition, but I do think that the criterion of
motivation weighs less heavily than dedication to external (mastery)
standards and exceptional skills in preparing students to meet mastery
standards. Here there is also no threshold passage rate, but the
expectation might be lower than for a master teacher because the
standards might be set higher by the master educator. One would only
hope so in the final years of studies to become a brain surgeon.
Bob Jensen
October 16, 2005 reply from Stokes, Len
[stokes@SIENA.EDU]
I feel it takes as much effort from a student to
get an "F" as an "A", just in the opposite direction. Having said that, I
think it is the teacher who can get "C" brain power to be motivated to do
"B" or better work, or similar things with other students that deserves to
be recognized as the master teacher.
My $.01 worth.
len
October 15, 2005 reply from Roberta Brown Tuskegee
University [RBrown1205@AOL.COM]
This thread reminded me of one of my first
successful grant funding searches when I was working in the Engineering
Division at Tuskegee University. I found a National Science Foundation
funded grant that essentially taught engineering faculty certain
education principles and techniques. Many college faculty get their
teaching position after coming directly from the private sector, where
they worked as mechanical, electrical, etc., engineers, and they did not
take education courses in college. A professor at West Point developed
the course, and offered it through NSF, and an acting engineering dean
at Tuskegee was awarded funding for the program to come to the
University for a number of years.
I am not sure if the program is still ongoing
at Tuskegee (it started in the late 1990's), but I see the program
offering at
http://www.dean.usma.edu/cme/cerc/1996-1997/T4E 1997.htm
I wonder if accounting professors can also
become college faculty directly from the private sector, without
education credits?
Degrees Versus Piecemeal Distance (Online)
Education
"Offering Entire Degrees Online is One Key to Distance
Education, Survey Finds," by Dan Carnevale, The Chronicle of
Higher Education, November 26, 2005, Page A1
The distance-education programs that
offer entire degrees online are more successful than those that offer only a
scattering of courses, a new survey has found.
The report, titled "Achieving
Success in Internet-Supported Learning in Higher Education," was written
by Rob Abel, president of a nonprofit organization called the Alliance for
Higher Education Competitiveness. The report was set to be released this
week.
Mr. Abel says the organization wanted to
find out what made a distance-education program successful and to share the
information with other institutions. The organization surveyed officials
at 21 colleges and universities that it determined to be successful in
distance education. In their responses, college officials highlighted
the need for such common elements as high-quality courses and reliable
technology.
But what struck Mr. Abel as most
important was that 89 percent of the institutions created online degree
programs instead of just individual online courses. Online degree
programs lead to success, he says, because they tend to highlight a college's
overall mission and translate into more institutional support for the faculty
members and students working online.
"It's easier to measure the
progress at a programmatic level," Mr. Abel says. "The
programmatic approach also gets institutions thinking about student-support
services."
Of course, success is subjective, he
says, and what may be deemed successful for one institution may not work at
another.
But he found that some college officials
believe distance education has not lived up to their expectations. He
hopes that some colleges will learn from institutions that have succeeded
online. "These particular institutions didn't see this as a bust at
all," Mr. Abel says. "Maybe that just means that they set
realistic expectations."
SUCCESS STORIES
One of the institutions included in the
report is the University of Florida, which enrolls more than 6,000 students in
its online degree programs. William H. Riffee, associate provost for
distance, continuing, and executive education at the university, says Florida
decided to move forward with a strong distance-education program because so
many students were demanding it.
"We don't have enough seats for the
people who want to be here," Mr. Riffee says. "We have a lot
of people who want to get a University of Florida degree but can't get to
Gainesville."
The university does not put a cap on
enrollments in online courses, he says. Full-time Florida professors
teach the content, and part-time faculty members around the country field some
of the questions from students.
"We have learned how to scale, and
we scale through an addition of faculty," Mr. Riffee says.
"You scale by adding faculty that you have confidence will be able to
facilitate students."
Another college the organization deemed
successful in distance education is Westwood College, a for-profit institution
that has campuses all over the country, in addition to its online degree
programs. Shaun McAlmont, president of Westwood College Online, says
some institutions may have trouble making the transition to online education
because higher education tends to be slow to change.
"How do you introduce this concept
to an industry that is very much steeped in tradition?" he asks.
"You really have to re-learn how you'll deliver that instruction."
Mr. McAlmont, who has also spent time as
an administrator at Stanford University, says non-profit institutions could
learn a lot from for-profit ones when it comes to teaching over the Internet.
Continued in article
Bob Jensen's threads on distance education are at http://faculty.trinity.edu/rjensen/crossborder.htm
You can read more about such matters
at http://faculty.trinity.edu/rjensen/255wp.htm
Also see the Dark Side and other
documents at http://faculty.trinity.edu/rjensen/000aaa/0000start.htm
For threaded audio and email
messages from early pioneers in distance education, go http://faculty.trinity.edu/rjensen/ideasmes.htm