In 2017 my Website was migrated to the cloud and reduced in size.
Hence some links below are broken.
One thing to try if a “www” link is broken is to substitute “faculty” for “www”.
For example, the broken link
http://www.trinity.edu/rjensen/Pictures.htm
can be changed to the corrected link
http://faculty.trinity.edu/rjensen/Pictures.htm
However, in some cases files had to be removed to reduce the size of my Website.
Contact me at
rjensen@trinity.edu if you really need a file that is missing.

 

 

Bob Jensen's Threads on Assessment

Bob Jensen at Trinity University

George Carlin - Who Really Controls America --- Click Here
"More kids pass tests if we simplify the tests --- Why education will never be fixed."

The Downfall of Lecturing

Measuring Teacher Effectiveness  

Altmetrics of Total-Impact

Coaches Graham and Gazowski

Grade Inflation, Teaching Evaluations and RateMyProfessor

Academic Whores: School Systems Pressured into Lowering Standards for Achievement Tests and Graduation

Performance Evaluation and Vegetables

Rubrics in Academia

Assessing Without Tests

How to Mislead With Statistics of Merit Scholars:  "Mom, Please Get Me Out of South Dakota!"

The Real Reason Organizations Resist Analytics

The New GMAT

Head Start Programs 

Assessment by Ranking May Be a Bad Idea 

Assessment by Grades May Be a Bad Idea

The Future: Badges of Competency-Based Learning Performance 

Concept Knowledge, Critical Thinking, Competency Testing, and Assessment of Deep Understanding

Tips on Preparing Multiple Choice Examinations  

Onsite Versus Online Differences for Faculty

Online Versus Onsite for Students

Onsite Versus Online Education (including controls for online examinations and assignments)

Student Engagement

Students Reviewing Each Other's Projects

Online Education Effectiveness and Testing

What Works in Education?

Predictors of Success

Minimum Grades as a School Policy

Team Grading

Too Good to Grade:  How can these students get into doctoral programs and law school if their prestigious universities will not disclose grades and class rankings?  Why grade at all in this case?

Software for faculty and departmental performance evaluation and management

K-12 School and College Assessment and College Admission Testing

Civil Rights Groups That Favor Standardized Testing

Computer-Based Assessment

Computer Grading of Essays

Outsourcing the Grading of Papers

Assessment in General (including the debate over whether academic research itself should be assessed)

Competency-Based Assessment

Assessment Issues: Measurement and No-Significant-Differences

Dangers of Self Assessment

The Criterion Problem 

Success Stories in Education Technology

Research Versus Teaching
"Favorite Teacher" Versus "Learned the Most"

Grade Inflation Versus Teaching Evaluations

Student Evaluations and Learning Styles   

Assessment Takes Center Stage in Online Learning:  The Saga of Western Governors University

Measures of Quality in Internet-Based Distance Learning

Number Watch: How to Lie With Statistics

Drop Out Problems   

On the Dark Side 

Accreditation Issues

Software for Online Examinations and Quizzes

The term "electronic portfolio," or "ePortfolio," is on everyone's lips.  What does this mean?

Grade Inflation Versus Course Evaluations  

Work Experience Substitutes for College Credits

Certification (Licensing) Examinations  

Should attendance guarantee passing?

Peer Review Controversies in Academic Journals

Real Versus Phony Book Reviews  

Research Questions About the Corporate Ratings Game

Cause Versus Correlation

Differences between "popular teacher"
versus "master teacher"
versus "mastery learning"
versus "master educator."

Edutopia: Assessment (a broader look at education assessment) ---  http://www.edutopia.org/assessment

Look beyond high-stakes testing to learn about different ways of assessing the full range of student ability -- social, emotional, and academic achievement.

Bob Jensen's threads on assessment ---
http://faculty.trinity.edu/rjensen/Assess.htm

 

Mathematics Assessment: A Video Library --- http://www.learner.org/resources/series31.html

November 1, 2012 Respondus message from Richard Campbell

Is the student taking your class the same one who is taking your exams??

Keep an eye on www.respondus.com

Respondus Monitor - online exams proctor ---
http://youtu.be/lGyc_HBchOw

Software for online examinations and quizzes ---
http://faculty.trinity.edu/rjensen/Assess.htm#Examinations

 

Test Drive Running a University ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#TestDrive 

Is student usage of Facebook correlated with lower grades?
Concerns About Social Networking, Blogging, and Twittering in Education ---
http://faculty.trinity.edu/rjensen/ListservRoles.htm 

Bob Jensen's threads on Cognitive Processes and Artificial Intelligence are at http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#CognitiveProcesses

Degrees Versus Piecemeal Distance (Online) Education

Bob Jensen's threads on memory and metacognition are at http://faculty.trinity.edu/rjensen/265wp.htm

Full Disclosure to Consumers of Higher Education (including assessment of colleges and the Spellings Commission Report) --- http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#FullDisclosure
Also see http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#Bok

Publish Exams Online ---
http://www.examprofessor.com/main/index.cfm

Controversies in Higher Education ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm

Bob Jensen's threads on cheating and plagiarism ---
http://faculty.trinity.edu/rjensen/plagiarism.htm

Effort Reporting Technology for Higher Education ---
http://www.huronconsultinggroup.com/uploadedFiles/ECRT_email.pdf

Some Thoughts on Competency-Based Training and Education ---
http://faculty.trinity.edu/rjensen/competency.htm

You can download (for free) hours of MP3 audio and the PowerPoint presentation slides from several of the best education technology workshops that I ever organized. --- http://www.cs.trinity.edu/~rjensen/002cpe/02start.htm 

Center for Research on Learning and Teaching --- http://www.engin.umich.edu/teaching/crltengin/researchscholarship/index.html

Asynchronous Learning Advantages and Disadvantages ---
http://faculty.trinity.edu/rjensen/255wp.htm

Dark Sides of Education Technologies ---
http://faculty.trinity.edu/rjensen/000aaa/0000start.htm

For threaded audio and email messages from early pioneers in distance education, go to http://faculty.trinity.edu/rjensen/ideasmes.htm

Full Disclosure to Consumers of Higher Education at 
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#FullDisclosure 
American Council on Education - GED Testing --- http://www.acenet.edu/Content/NavigationMenu/ged/index.htm 
From PhD Comics: Helpers for Filling Out Teaching Evaluations --- 
http://www.phdcomics.com/comics.php?f=847  

As David Bartholomae observes, “We make a huge mistake if we don’t try to articulate more publicly what it is we value in intellectual work. We do this routinely for our students — so it should not be difficult to find the language we need to speak to parents and legislators.” If we do not try to find that public language but argue instead that we are not accountable to those parents and legislators, we will only confirm what our cynical detractors say about us, that our real aim is to keep the secrets of our intellectual club to ourselves. By asking us to spell out those secrets and measuring our success in opening them to all, outcomes assessment helps make democratic education a reality.
Gerald Graff, "Assessment Changes Everything," Inside Higher Ed, February 21, 2008 --- http://www.insidehighered.com/views/2008/02/21/graff
Gerald Graff is professor of English at the University of Illinois at Chicago and president of the Modern Language Association. This essay is adapted from a paper he delivered in December at the MLA annual meeting, a version of which appears on the MLA’s Web site and is reproduced here with the association’s permission. Among Graff’s books are Professing Literature, Beyond the Culture Wars and Clueless in Academe: How School Obscures the Life of the Mind.

Would-be lawyers in Wisconsin who have challenged the state’s policy of allowing graduates of state law schools to practice law without passing the state’s bar exam will have their day in court after all, the Associated Press reported. A federal appeals court has reinstated a lawsuit challenging the practice, which apparently is unique in the United States.
Katherine Mangan, "Appeals Court Reinstates Lawsuit Over Wisconsin's Bar-Exam Exemption," Chronicle of Higher Education, January 29, 2008 ---
Click Here


Forwarded by John Stancil

Seems that a prof allowed an 8½ x 11 sheet of paper as a note card during a closed-book examination.

One student says “Let me get this straight. I can use anything I put on the card?”

Prof says, “Yes.”

The day of the test, the student brought a blank sheet of paper, put it on the floor and had a grad student stand on the paper.


"How Do People Learn," Sloan-C Review, February 2004 --- 
http://www.aln.org/publications/view/v3n2/coverv3n2.htm 

Like some of the other well known cognitive and affective taxonomies, the Kolb figure illustrates a range of interrelated learning activities and styles beneficial to novices and experts. Designed to emphasize reflection on learners’ experiences, and progressive conceptualization and active experimentation, this kind of environment is congruent with the aim of lifelong learning. Randy Garrison points out that:

From a content perspective, the key is not to inundate students with information. The first responsibility of the teacher or content expert is to identify the central idea and have students reflect upon and share their conceptions. Students need to be hooked on a big idea if learners are to be motivated to be reflective and self-directed in constructing meaning. Inundating learners with information is discouraging and is not consistent with higher order learning . . . Inappropriate assessment and excessive information will seriously undermine reflection and the effectiveness of asynchronous learning. 

Reflection on a big question is amplified when it enters collaborative inquiry, as multiple styles and approaches interact to respond to the challenge and create solutions. In How People Learn: Brain, Mind, Experience, and School, John Bransford and colleagues describe a legacy cycle for collaborative inquiry, depicted in a figure by Vanderbilt University researchers  (see image, lower left).

Continued in the article


December 12, 2003 message from Tracey Sutherland [return@aaahq.org]

THE EDUCATIONAL COMPETENCY ASSESSMENT (ECA) WEB SITE IS LIVE! http://www.aicpa-eca.org 

The AICPA provides this resource to help educators integrate the skills-based competencies needed by entry-level accounting professionals. These competencies, defined within the AICPA Core Competency Framework Project, have been derived from academic and professional competency models and have been widely endorsed within the academic community. Created by educators for educators, the evaluation and educational strategies resources on this site are offered for your use and adaptation.

The ECA site contains a LIBRARY that, in addition to the Core Competency Database and Education Strategies, provides information and guidance on Evaluating Competency Coverage and Assessing Student Performance.

To assist you as you assess student performance and evaluate competency coverage in your courses and programs, the ECA ORGANIZERS guide you through the process of gathering, compiling and analyzing evidence and data so that you may document your activities and progress in addressing the AICPA Core Competencies.

The ECA site can be accessed through the Educator's page of aicpa.org, or at the URL listed above.

 

The Downfall of Lecturing

Bob Jensen's threads on metacognitive learning ---
http://faculty.trinity.edu/rjensen/265wp.htm

Micro Lectures and Student-Centered Learning ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#MicroLectures

Center for Research on Learning and Teaching --- http://www.engin.umich.edu/teaching/crltengin/researchscholarship/index.html


Great Lectures May Be Learning Losers
"Appearances can be deceiving: instructor fluency increases perceptions of learning without increasing actual learning," by Shana K. Carpenter, Miko M. Wilford, Nate Kornell, Kellie M. Mullaney, Springer.com, May 2013 ---
http://link.springer.com/article/10.3758%2Fs13423-013-0442-z

Abstract
The present study explored the effects of lecture fluency on students’ metacognitive awareness and regulation. Participants watched one of two short videos of an instructor explaining a scientific concept. In the fluent video, the instructor stood upright, maintained eye contact, and spoke fluidly without notes. In the disfluent video, the instructor slumped, looked away, and spoke haltingly with notes. After watching the video, participants in Experiment 1 were asked to predict how much of the content they would later be able to recall, and participants in Experiment 2 were given a text-based script of the video to study. Perceived learning was significantly higher for the fluent instructor than for the disfluent instructor (Experiment 1), although study time was not significantly affected by lecture fluency (Experiment 2). In both experiments, the fluent instructor was rated significantly higher than the disfluent instructor on traditional instructor evaluation questions, such as preparedness and effectiveness. However, in both experiments, lecture fluency did not significantly affect the amount of information learned. Thus, students’ perceptions of their own learning and an instructor’s effectiveness appear to be based on lecture fluency and not on actual learning.

Downfall of Lecturing --- http://faculty.trinity.edu/rjensen/Assess.htm#DownfallOfLecturing

Two Ongoing Papers by Bob Jensen

 


Socratic Method Thread on the AECM

September 25, 2010 message from super accounting teacher Joe Hoyle

-----Original Message-----
From: Hoyle, Joe [mailto:jhoyle@richmond.edu]
Sent: Saturday, September 25, 2010 8:42 AM
To: Jensen, Robert
Subject: RE: Question for Joe Hoyle: The Quickest 2011 CPA Exam Breakdown You'll Ever Read

Hi Bob,

Hope this finds you well. I just got through giving a bunch of tests last week (Intermediate Accounting II and Introduction to Financial Accounting) and, as always, some learned it all and some learned a lot less.

I am using my own new Financial Accounting textbook this semester. As you may know, the book is written in an entirely Socratic Method (question and answer style). I find that approach stimulates student curiosity much better than the traditional textbook, which uses what I call a sermon or monologue style. The Socratic Method has been around for 2,500 years -- isn't it strange that it has been ignored as a possible textbook model?

I'm not a big fan of college education presently (that is college education and not just accounting education). There are three major components to education: professors, students, and the textbook (or other course material). It is hard to change the professors and the students. I think if we want to create a true evolution in college education in a hurry (my goal), the way to do that is produce truly better college textbooks. I wish more college accounting professors would think seriously about how textbooks could be improved. At the AAA meeting in San Francisco in August, I compared a 1925 intermediate accounting textbook to a 2010 intermediate accounting textbook and there was a lot less difference than you might have expected. Textbooks have simply failed to evolve very much (okay, they are now in color).

It is my belief that textbooks were created under a "conveyance of information" model. An educated person writes a textbook to convey tons of information to an uneducated person. In the age of Google, Yahoo, Facebook, YouTube, and Wikipedia, I think the need to convey information is no longer so urgent. I think we need to switch to a "thinking about information" model. And, if that is the goal, the Socratic Method is perfect. You can start off with a question like "Company X reports inventory at $500,000.
What does that mean? Is it the cost or the retail value? And, if it is one, why not the other?" Accounting offers thousands of such delightful questions.

But, I digress -- you asked about CPAreviewforFREE. We just finished our 117th week and it has been so much fun. We had a person write in this week (on our Facebook page) to tell us that she had made three 99s and an 89. Over the summer, we averaged about 300,000 page views per week. That is page views and not hits but that is still a lot of people answering a lot of questions.

We are currently writing new questions for the new exam starting in 2011 including task-based simulations, IFRS, and written communications questions for BEC. I personally think the exam will change less than people expect. Currently, roughly 50 percent of the people who take a part pass that part. I would expect that in January under the new CPA exam format, roughly 50 percent of the people who take a part will pass that part. And, I would guess it will be almost exactly the same 50 percent.

However, to be honest with you, we are in the process of adding a subscription service. I don't know if you ever go to ESPN.com but they give a lot of free information (Red Sox beat the Yankees last night 10-8) but they also have a subscription service where you can learn about things in more depth for a monthly fee (almost like a newspaper). Our 2,100 free questions and answers will ALWAYS stay free. But we found that people really wanted to have some content. If they missed a question on earnings per share, for example, they wanted to know more about how convertible bonds are handled in that computation. They didn't feel the need to pay $2,500 (don't get me started on what I think about that) but they wanted a bit more information.

To date, we have subscription content for FAR and Regulation. Each is available for $15 per month which I think is a reasonable price (especially in a recession). (As an aside, I have long felt that the high cost of CPA review programs keeps poor people out of the profession which I think is extremely unfair and even unAmerican.) In our FAR content, for example, we have 621 slides that cover everything I could think of that FAR will probably ask about. There are probably more slides in Regulation but I haven't counted them yet. BEC and Auditing will be ready as quickly as possible. When you have no paid employees, things only get done as fast as you can get them done.

Bob, I was delighted to see your name on my email this morning. I'm actually in Virginia Beach on a 2 day vacation but decided I'd rather write you than go walk on the beach :). If I can ever address more questions about textbooks, CPAreviewforFREE, or the Red Sox and the Yankees, please let me know. As my buddy Paul Clikeman (who is on the AECM list) will tell you, I am a person of opinion.

Joe

September 25, 2010 message from New Zealand Accounting Practitioner Robert Bruce Walker

-----Original Message-----
From: THE Internet Accounting List/Forum for CPAs [mailto:CPAS-L@LISTSERV.LOYOLA.EDU]
On Behalf Of Robert Bruce Walker
Sent: Sunday, September 26, 2010 5:16 AM
To: CPAS-L@LISTSERV.LOYOLA.EDU
Subject: Re: Question for Joe Hoyle: The Quickest 2011 CPA Exam Breakdown You'll Ever Read

Interesting thesis from your friend in regard to teaching method. I must admit, though I attempt to use the Socratic method, I am suspicious of it. I use it when teaching my staff. I lay out the double entry and leave the conceptual points empty and invite my employee to complete the entries. What happens is that I continue to ask the questions giving more and more away until I lose my temper and complete the exercise and say: 'There you are. Why can't you do that?!?' This may merely tell you that I am too impatient, probably true.

The problem with the Socratic method is that it is based on a proposition related to the nature of knowledge. Plato, or perhaps Socrates, held the view that true knowledge was innate. It is there from the moment of birth or earlier and the Socratic method is applied to reveal or assist to reveal that which lies within the knowledgeable but ill-formed brain. But then it is concerned only with the knowledge that is true knowledge, which essentially reduces to a knowledge of mathematics or a priori deductive 'truth'- we all have, for instance, Pythagoras' theorem in our heads. Other 'things' are not knowledge. That is material that is derived from sense experience. Whilst I only have a cursory knowledge of his work, I think that Chomsky essentially adopts a view that language lies innate in the human baby for otherwise they could not acquire a facility with language as rapidly as they do.

I do recall many years ago studying a Platonic dialogue, I can't even remember its name, in which Socrates attempts to demonstrate how a geometrical problem can be solved by a slave boy simply from Socrates' questioning. The slave boy doesn't get it. Socrates is reduced to drawing a picture in the sand. I was taught that this necessity is the implied concession from Plato that the Socratic method doesn't actually work.

Does the discipline that is accounting lie latent and ill-formed in our brains? That might depend on what accounting actually is. Possibly, as I am essentially innumerate, accounting is the only mathematical thing I have ever truly understood. Once I saw the essence of it - I can remember where and when this happened (Putney public library, London) and from that moment I held my sense of accounting as if a religious truth. That sense of religiosity has driven everything I have done ever since. In other words I have a sense of wonder. But then I know that accounting is as much about words as numbers. It is where the words meet the numbers, where the numerical ideal of accounting meets the reality of economic events, that the accountant must stand.

Here is a thought from TS Eliot, the quintessential Trans-Atlantic soul, in his poem The Hollow Men:

Between the idea
And the reality
Between the motion
And the act
Falls the Shadow

September 26, 2010 reply from Bob Jensen

-----Original Message-----
From: Jensen, Robert
Sent: Sunday, September 26, 2010 5:16 AM
To: CPAS-L@LISTSERV.LOYOLA.EDU
Subject: RE: Question for Joe Hoyle: The Quickest 2011 CPA Exam Breakdown You'll Ever Read

 Hi Robert and Todd,

 Socratic Method is the preferred pedagogy in law schools --- http://en.wikipedia.org/wiki/Socratic_Method 

Psychologists indeed study memory and learning with particular focus on metacognition ---
http://faculty.trinity.edu/rjensen/265wp.htm 

Socratic Method has various metacognitive benefits.

I don't think we should take Socrates/Plato too literally about latent knowledge. There is some evidence of latent knowledge such as when three year olds, notably savants, can play music or perform math tasks they've never been taught.

But accountants and chemists have to be taught. The question is by what pedagogy? The Socratic Method is more closely aligned with "learning on your own" using Socratic questions to guide students learning and reasoning on their own. It engages critical thinking and reasoning. It is not, however, as efficient as most other pedagogies when a lot of material must be covered in a short period of time. For example, I don't particularly recommend the Socratic Method in an audience of 200 CPAs seeking to quickly pick up tips about updates to the tax code or IFRS in a six-hour CPE session.

Even though Joe Hoyle attempts Socratic Method by giving students problems that they must then solve on their own, Joe does use a textbook that guides their learning asynchronously. He also lectures.

A better example of "learning on your own" is the BAM pedagogy in intermediate accounting which has demonstrated superiority for long-term memory in spite of only having one lecture a year and no textbook. This is closer to the adage that experience is the best teacher. But "learning on your own" is a painful and slimy-sweat pedagogy when administered at its best.

Professors Catenach, Croll, and Grinacker received an AAA Innovation in Accounting Education Award for introducing the BAM pedagogy in two semesters of Intermediate Accounting at the University of Virginia. Among other things was a significant increase in performance on the CPA examination in a program that, under the BAM pedagogy, had no assigned textbook and taught even less to the CPA examination than before instigating the BAM pedagogy.
http://faculty.trinity.edu/rjensen/265wp.htm 

The undisputed advantage of the BAM pedagogy is better long-term memory.

BAM is closest to an ideal when combined with competency-based assessment, although such assessment might be carried too far in terms of limiting critical thinking learning (if students tend to rote memorize for their competency-based final examinations) ---
http://faculty.trinity.edu/rjensen/assess.htm#ECA 

The BAM pedagogy is probably Socratic Method at nearly its best in terms of learning. There can be a price to be paid in the sense that it is more time consuming for students (probably far too much for a single course taken) and tends to burn out instructors and students. If a student had to take five simultaneous courses all using the BAM pedagogy, the top students would probably drop out of college from lack of sleep and health deterioration.

My threads on alternate pedagogies, including Mastery Learning, are at --- http://faculty.trinity.edu/rjensen/assess.htm#Teaching 
Mastery Learning, like the BAM pedagogy, burns out students and instructors.

By the way, it is not so simple to test “learning” because the term “learning” is very ambiguous. It is easiest to test learning of facts, as in a geography test on state capitals or a spelling bee. We can test problem-solving ability, as in a mathematics test. However, since students vary so much at the beginning of a math course, it is difficult to measure the incremental benefit of the course apart from measuring problem-solving ability at the start of the course.

 Bob Jensen

September 26, 2010 reply from Joe Hoyle

Bob,
I can’t speak for Socrates or Plato about the innate nature of knowledge, but I do think students can be led to figure things out on their own by the use of carefully sequenced questions.  And, isn’t that what we want:  for them to figure things out on their own now so they can figure things out on their own after they leave our class?

 

Virtually all of us have been taught by a standard lecture style so it is difficult to even conceive of something different.   Let me give you an example of a question and answer class.

 

After about three weeks of the semester, I started my sophomore class recently with the following series of questions.   As it happened, there was no reading here.  The students pretty much (but not entirely) started out as blank slates which I think Socrates would have preferred.   I’ll give the questions here; you can figure out how the students would have answered.   I do try to move through these questions at lightning speed—I want students on the edge of their seats.

 

--My company owns a few thousand shares of Ford Motor Company.  These shares cost $40,000 but had a fair value of $65,000.   On a set of financial statements, where is this investment reported?

--Why is it shown as an asset?

--What do I mean by cost?

--What do I mean by fair value?

--Do you think US GAAP allows my company to make the choice of whether to use cost or fair value for reporting purposes?

--Okay if US GAAP only allows one method of reporting, let’s take a class vote on whether FASB would have picked cost or fair value.   (Note – the vote was roughly 50-50.)

--(To a student):  You picked cost – what would be the advantages of reporting cost?

--(To a different student):   You picked fair value – what would be the advantage of reporting fair value?

--Is one method totally right and one method totally wrong?   Is that what we are trying to determine -- right versus wrong?

--Why did the company make this investment?

--When will they want to sell this investment?

--Are they able to sell the investment immediately if they so choose?

--Can they get roughly $65,000 immediately if they decide to sell?

--US GAAP requires this investment to be reported at fair value.   What does that tell us?

--My company owns two acres of land that it bought to use for a parking lot at some point in the future.  The land cost $40,000 but had a fair value of $65,000.   On a set of financial statements, where is this investment reported?

--Okay, this is another asset.   Do you think US GAAP allows my company to make the choice of whether to use cost or fair value for reporting purposes?

--If the land is like the investment, how will it be reported?

--When will my company choose to sell this land?

--Will the company be able to sell the land immediately if it so chooses?

--Can they get roughly $65,000 immediately if they decide to sell the land?

--If they didn’t buy the land to sell, if they cannot necessarily sell the land immediately, and if there is no market to create an immediate sale, is there sufficient reason to report the land at its $65,000 fair value?

--So, investments are reported at fair value whereas land is reported at cost.   Does it surprise you that these two assets are reported in different ways?

--Let’s take one more and see if you can figure it out – your company has inventory that has a cost of $40,000 and a fair value of $65,000.

--Did you buy the inventory to sell or to keep and use?

--Are you sure you can get the $65,000 right now if you need the money?

--Are you sure you can make a sale immediately?

--Inventory resembles an investment in that it was bought in hopes of selling for a gain.  However, it also resembles land in that a sale at a certain amount is not guaranteed without a formal market.   Consequently, whether you use cost or fair value is not obvious.   US GAAP says inventory should be reported at cost (we will later discuss lower of cost or market).   What does that tell us about when we should report an asset at fair value?

--On our first test, if I gave you another asset that we have not yet discussed, could you determine whether it was likely to be reported at cost or fair value?

It took us about 20 minutes to get this far in the class and every student had to answer at least one question orally.   At the end, I felt that they all had a better understanding of the reporting of assets.   Often students have the view that all accounts report the same information.   I want them to understand that US GAAP requires different accounts to be reported in different ways and that each way has its own logic based on the rules of accounting.   I want them to be engaged and I want them to figure as much out for themselves as possible.   At the end, I think they know that investments in stocks are reported at fair value whereas land and inventory are reported at cost (well, until we discuss lower of cost or market).   Better still, I think they understand why and can make use of that knowledge.

Does it always work?   Oh, of course not.   But I do think it gets them thinking about accounting rather than memorizing accounting.   One day in 1991, I switched overnight from lecturing to asking questions.   Try it – you might like it.

Joe



My Hero at the American Accounting Association Meetings in San Antonio on August 13, 2002 --- Amy Dunbar

How do students evaluate Amy Dunbar's online tax courses?

This link is to a pdf document that I will be presenting at a CPE session with Bob Jensen, Nancy Keeshan, and Dennis Beresford at the AAA on Tuesday. I updated the paper I wrote that summarized the summer 2001 online course. You might be interested in the exhibits, particularly Exhibit II, which summarizes student responses to the learning tools over the two summers. This summer I used two new learning tools: synchronous classes (I used Placeware) and RealPresenter videos. My read of the synchronous class comments is that most students liked having synchronous classes, but not often and not long ones! 8 of the 57 responding students thought the classes were a waste of time. 19 of my students, however, didn't like the RealPresenter videos, partly due to technology problems. Those who did like them, however, really liked them and many wanted more of them. I think that as students get faster access to the Internet, the videos will be more useful.

http://www.sba.uconn.edu/users/adunbar/genesis_of_an_online_course_2002.pdf 

Amy Dunbar 
UConn


Education is an admirable thing, but it is well to remember from time to time that nothing that is worth learning can be taught.
Oscar Wilde

"The Objective of Education is Learning, Not Teaching (audio version available)," University of Pennsylvania's Knowledge@Wharton, August 20, 2008 --- http://knowledge.wharton.upenn.edu/article.cfm;jsessionid=9a30b5674a8d333e4d18?articleid=2032

In their book, Turning Learning Right Side Up: Putting Education Back on Track, authors Russell L. Ackoff and Daniel Greenberg point out that today's education system is seriously flawed -- it focuses on teaching rather than learning. "Why should children -- or adults -- be asked to do something computers and related equipment can do much better than they can?" the authors ask in the following excerpt from the book. "Why doesn't education focus on what humans can do better than the machines and instruments they create?"

"Education is an admirable thing, but it is well to remember from time to time that nothing that is worth learning can be taught."
   -- Oscar Wilde

Traditional education focuses on teaching, not learning. It incorrectly assumes that for every ounce of teaching there is an ounce of learning by those who are taught. However, most of what we learn before, during, and after attending schools is learned without its being taught to us. A child learns such fundamental things as how to walk, talk, eat, dress, and so on without being taught these things. Adults learn most of what they use at work or at leisure while at work or leisure. Most of what is taught in classroom settings is forgotten, and much of what is remembered is irrelevant.

In most schools, memorization is mistaken for learning. Most of what is remembered is remembered only for a short time, but then is quickly forgotten. (How many remember how to take a square root or ever have a need to?) Furthermore, even young children are aware of the fact that most of what is expected of them in school can better be done by computers, recording machines, cameras, and so on. They are treated as poor surrogates for such machines and instruments. Why should children -- or adults, for that matter -- be asked to do something computers and related equipment can do much better than they can? Why doesn't education focus on what humans can do better than the machines and instruments they create?

When those who have taught others are asked who in the classes learned most, virtually all of them say, "The teacher." It is apparent to those who have taught that teaching is a better way to learn than being taught. Teaching enables the teacher to discover what one thinks about the subject being taught. Schools are upside down: Students should be teaching and faculty learning.

After lecturing to undergraduates at a major university, I was accosted by a student who had attended the lecture. After some complimentary remarks, he asked, "How long ago did you teach your first class?"

I responded, "In September of 1941."

"Wow!" The student said. "You mean to say you have been teaching for more than 60 years?"

"Yes."

"When did you last teach a course in a subject that existed when you were a student?"

This difficult question required some thought. After a pause, I said, "September of 1951."

"Wow! You mean to say that everything you have taught in more than 50 years was not taught to you; you had to learn on your own?"

"Right."

"You must be a pretty good learner."

I modestly agreed.

The student then said, "What a shame you're not that good a teacher."

The student had it right; what most faculty members are good at, if anything, is learning rather than teaching. Recall that in the one-room schoolhouse, students taught students. The teacher served as a guide and a resource but not as one who force-fed content into students' minds.

Ways of Learning

There are many different ways of learning; teaching is only one of them. We learn a great deal on our own, in independent study or play. We learn a great deal interacting with others informally -- sharing what we are learning with others and vice versa. We learn a great deal by doing, through trial and error. Long before there were schools as we know them, there was apprenticeship -- learning how to do something by trying it under the guidance of one who knows how. For example, one can learn more architecture by having to design and build one's own house than by taking any number of courses on the subject. When physicians are asked whether they learned more in classes or during their internship, without exception they answer, "Internship."

In the educational process, students should be offered a wide variety of ways to learn, among which they could choose or with which they could experiment. They do not have to learn different things the same way. They should learn at a very early stage of "schooling" that learning how to learn is largely their responsibility -- with the help they seek but that is not imposed on them.

The objective of education is learning, not teaching.

There are two ways that teaching is a powerful tool of learning. Let's abandon for the moment the loaded word teaching, which is unfortunately all too closely linked to the notion of "talking at" or "lecturing," and use instead the rather awkward phrase explaining something to someone else who wants to find out about it. One aspect of explaining something is getting yourself up to snuff on whatever it is that you are trying to explain. I can't very well explain to you how Newton accounted for planetary motion if I haven't boned up on my Newtonian mechanics first. This is a problem we all face all the time, when we are expected to explain something. (Wife asks, "How do we get to Valley Forge from home?" And husband, who does not want to admit he has no idea at all, excuses himself to go to the bathroom; he quickly Googles Mapquest to find out.) This is one sense in which the one who explains learns the most, because the person to whom the explanation is made can afford to forget the explanation promptly in most cases; but the explainers will find it sticking in their minds a lot longer, because they struggled to gain an understanding in the first place in a form clear enough to explain.

The second aspect of explaining something that leaves the explainer more enriched, and with a much deeper understanding of the subject, is this: To satisfy the person being addressed, to the point where that person can nod his head and say, "Ah, yes, now I understand!" explainers must not only get the matter to fit comfortably into their own worldview, into their own personal frame of reference for understanding the world around them, they also have to figure out how to link their frame of reference to the worldview of the person receiving the explanation, so that the explanation can make sense to that person, too. This involves an intense effort on the part of the explainer to get into the other person's mind, so to speak, and that exercise is at the heart of learning in general. For, by practicing repeatedly how to create links between my mind and another's, I am reaching the very core of the art of learning from the ambient culture. Without that skill, I can only learn from direct experience; with that skill, I can learn from the experience of the whole world. Thus, whenever I struggle to explain something to someone else, and succeed in doing so, I am advancing my ability to learn from others, too.

Learning through Explanation

This aspect of learning through explanation has been overlooked by most commentators. And that is a shame, because both aspects of learning are what makes the age mixing that takes place in the world at large such a valuable educational tool. Younger kids are always seeking answers from older kids -- sometimes just slightly older kids (the seven-year old tapping the presumed life wisdom of the so-much-more-experienced nine year old), often much older kids. The older kids love it, and their abilities are exercised mightily in these interactions. They have to figure out what it is that they understand about the question being raised, and they have to figure out how to make their understanding comprehensible to the younger kids. The same process occurs over and over again in the world at large; this is why it is so important to keep communities multi-aged, and why it is so destructive to learning, and to the development of culture in general, to segregate certain ages (children, old people) from others.

Continued in article

Bob Jensen's threads on assessment, learning, and technology in education are at http://faculty.trinity.edu/rjensen/000aaa/0000start.htm


Question
What types of students benefit most versus least from video lectures?

"Video Lectures May Slightly Hurt Student Performance," by Sophia Li, Inside Higher Ed, Chronicle of Higher Education, June 21, 2010 ---
http://chronicle.com/blogPost/Video-Lectures-May-Slightly/24963/

No clear winner emerges in the contest between video and live instruction, according to the findings of a recent study led by David N. Figlio, a professor of education and social policy at Northwestern University. The study found that students who watched lectures online instead of attending in-person classes performed slightly worse in the course over all.

A previous analysis by the U.S. Department of Education that examined existing research comparing online and live instruction favored online learning over purely in-person instruction, according to the working paper by Mr. Figlio and his colleagues, which was released this month by the National Bureau of Economic Research.

But Mr. Figlio's study contradicted those results, showing that live instruction benefits Hispanic students, male students, and lower-achieving students in particular.

Colleges and universities that are turning to video lectures because of their institutions' tight budgets may be doing those students a disservice, said Mark Rush, a professor of economics at the University of Florida and one of the working paper's authors.

More research will be necessary, however, before any definite conclusions can be drawn about the effectiveness of video lectures, said Lu Yin, a graduate student at the University of Florida who worked on the project. Future research could study the effectiveness of watching lectures online for topics other than microeconomics, which was the subject of the course evaluated in the study, Ms. Yin said.

Jensen Comment
Studies like this just do not extrapolate well into the real world, because so very, very much depends upon both how instructors use videos and how students use videos. My students had to take my live classes, but my Camtasia videos allowed them to go over technical modules (PQQ, Possible Quiz Questions) again and again, at their own learning pace, until they got the technical material down pat ---
http://www.cs.trinity.edu/~rjensen/video/acct5342/
Students who did not use the videos as intended usually paid a price.

However, some outcomes in the above study conform to my priors. For example, Brigham Young University (BYU) has very successfully replaced live lectures with variable-speed video lectures in the first two basic accounting courses ---
http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#BYUvideo

However, BYU most likely has mostly high-achieving students to begin with, especially in accounting. It would be interesting to formally study the use of such variable-speed video in colleges having a higher proportion of lower-achieving students. My guess is that the variable-speed video lectures would be less effective with lower-achieving students who are not motivated to keep replaying videos until they get the technical material down pat. They may be lower achieving in great measure because they are less motivated learners or learners who have too many distractions (like supporting children) to have as much quality study time.

And live lecturing/mentoring is hard to put in a single category because there are so many types of live lecturing/mentoring ---
http://faculty.trinity.edu/rjensen/assess.htm#Teaching

In conclusion, I think much depends upon the quality of the video versus lecture, class size, and student motivation. Videos offer the tremendous advantage of instant replay and being able to adjust to the best learning pace of the student. Live lectures can, and often do, lead to more human interactive factors that can be good (if they motivate) and bad (if they distract or instill dysfunctional fear).

The best video lectures are probably those that are accompanied with instant messaging with an instructor or tutor that can provide answers or clues to answers not on the video.


"More Faculty Members Adopt 'Student Centered' Teaching," Chronicle of Higher Education, October 18, 2009 ---
http://chronicle.com/article/Chart-More-Faculty-Members/48848/

Professors are warming to new methods of teaching and testing that experts say are more likely to engage students, a UCLA survey found last year. Below are percentages of faculty members who said they used these approaches in all or most of the courses they taught. Those trends may continue, UCLA says, as full professors retire. Assistant professors were much more likely, for example, to structure teaching around small groups of students, while full professors were more likely to lecture extensively.
                                                         2005    2008
Selected teaching methods
Cooperative learning (small groups of students)           48%     59%
Using real-life problems*                                 n/a     56%
Group projects                                            33%     36%
Multiple drafts of written work                           25%     25%
Student evaluations of one another's work                 16%     24%
Reflective writing/journaling                             18%     22%
Electronic quizzes with immediate feedback in class*      n/a      7%
Extensive lecturing (not student-centered)                55%     46%
Selected examination methods
Short-answer exams                                        37%     46%
Term and research papers                                  35%     44%
Multiple-choice exams                                     32%     33%
Grading on a curve                                        19%     17%
* Not asked in the 2005 survey
Note: The figures are based on survey responses of 22,562 faculty members at 372 four-year colleges and universities nationwide. The survey was conducted in the fall and winter of 2007-8 and covered full-time faculty members who spent at least part of their time teaching undergraduates. The figures were statistically adjusted to represent the total population of full-time faculty members at four-year institutions. Percentages are rounded.
Source: "The American College Teacher: National Norms for the 2007-8 HERI Faculty Survey," University of California at Los Angeles Higher Education Research Institute

Bob Jensen's threads on metacognitive learning ---
http://faculty.trinity.edu/rjensen/265wp.htm

Bob Jensen's threads on higher education are at
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm

 


"Web Surfing in the Classroom: Sound Familiar?" by Catherine Rampell, Chronicle of Higher Education, May 15, 2008 --- http://chronicle.com/wiredcampus/index.php?id=3004&utm_source=wc&utm_medium=en

Over at the New York Times's Freakonomics blog, Yale Law School professor Ian Ayres praises the University of Chicago Law School's decision to eliminate Internet access in some classrooms. But more importantly, he recounts an amusing sketch from Yale's "Law Revue" skit night, which is worth sharing in full:

One of the skits had a group of students sitting at desks, facing the audience, listening to a professor drone on.

All of the students were looking at laptops except for one, who had a deck of cards and was playing solitaire. The professor was outraged and demanded that the student explain why she was playing cards. When she answered “My laptop is broken,” I remember there was simultaneously a roar of laughter from the student body and a gasp from the professors around me. In this one moment, we learned that something new was happening in class.

Bob Jensen's threads on higher education controversies are at http://faculty.trinity.edu/rjensen/HigherEdControversies.htm


Random Thoughts (about learning from a retired professor of engineering) ---  http://www4.ncsu.edu/unity/lockers/users/f/felder/public/Columns.html

Dr. Felder's column in Chemical Engineering Education

Focus is heavily upon active learning and group learning.

Bob Jensen's threads on learning are in the following links:

http://faculty.trinity.edu/rjensen/assess.htm

http://faculty.trinity.edu/rjensen/255wp.htm

http://faculty.trinity.edu/rjensen/265wp.htm


March 3, 2005 message from Carolyn Kotlas [kotlas@email.unc.edu

WHAT LEADS TO ACHIEVING SUCCESS IN DISTANCE EDUCATION?

"Achieving Success in Internet-Supported Learning in Higher Education," released February 1, 2005, reports on the study of distance education conducted by the Alliance for Higher Education Competitiveness (A-HEC). A-HEC surveyed 21 colleges and universities to "uncover best practices in achieving success with the use of the Internet in higher education." Some of the questions asked by the study included:

"Why do institutions move online? Are there particular conditions under which e-Learning will be successful?"

"What is the role of leadership and by whom? What level of investment or commitment is necessary for success?"

"How do institutions evaluate and measure success?"

"What are the most important and successful factors for student support and faculty support?"

"Where do institutions get stuck? What are the key challenges?"

The complete report is available online, at no cost, at http://www.a-hec.org/e-learning_study.html.

The "core focus" of the nonprofit Alliance for Higher Education Competitiveness (A-HEC) "is on communicating how higher education leaders are creating positive change by crystallizing their mission, offering more effective academic programs, defining their role in society, and putting in place balanced accountability measures." For more information, go to http://www.a-hec.org/ . Individual membership in A-HEC is free.


Hi Yvonne,

For what it is worth, my advice to new faculty is at http://faculty.trinity.edu/rjensen/000aaa/newfaculty.htm 

One thing to remember is that the employers of our students (especially the public accounting firms) are very unhappy with our lecture/drill pedagogy at the introductory and intermediate levels. They believe that such pedagogy turns away top students, especially creative and conceptualizing students. Employers believe that lecture/drill pedagogy attracts savant-like memorizers who can recite their lessons chapter and verse but have few creative talents and poor prospects for becoming leaders. The large accounting firms believed this so strongly that they donated several million dollars to the American Accounting Association for the purpose of motivating new pedagogy experimentation. This led to the Accounting Education Change Commission (AECC) and the mixed-outcome experiments that followed. See http://accounting.rutgers.edu/raw/aaa/facdev/aecc.htm 

The easiest pedagogy for faculty is lecturing, and it is appealing to busy faculty who do not have time for students outside the classroom. When lecturing to large classes it is even easier because you don't have to get to know the students and have a great excuse for using multiple choice examinations and graduate student teaching assistants. I always remember an economics professor at Michigan State University who said that when teaching basic economics it did not matter whether he had a live class of 300 students or a televised class of 3,000 students. His full-time teaching load was three hours per week in front of a TV camera. He was a very good lecturer and truly loved his three-hour per week job!

Lecturing appeals to faculty because it often leads to the highest teaching evaluations.  Students love faculty who spoon feed and make learning seem easy.  It's much easier when mom or dad spoon the pudding out of the jar than when you have to hold your own spoon and/or find your own jar.

An opposite but very effective pedagogy is the AECC (University of Virginia) BAM Pedagogy that entails live classrooms with no lectures. BAM instructors think it is more important for students to learn on their own instead of sitting through spoon-fed learning lectures. I think it takes a special kind of teacher to pull off the astoundingly successful BAM pedagogy. Interestingly, it is often some of our best lecturers who decided to stop lecturing because they experimented with the BAM and found it to be far more effective for long-term memory. The top BAM enthusiasts are Tony Catanach at Villanova University and David Croll at the University of Virginia. Note, however, that most BAM applications have been at the intermediate accounting level. I have my doubts (and I think BAM instructors will agree) that BAM will probably fail at the introductory level. You can read about the BAM pedagogy at http://faculty.trinity.edu/rjensen/265wp.htm 

At the introductory level we have what I like to call the Pincus (User Approach) Pedagogy. Karen Pincus is now at the University of Arkansas, but at the time that her first learning experiments were conducted, she taught basic accounting at the University of Southern California. The Pincus Pedagogy is a little like both the BAM and the case method pedagogies. However, instead of having prepared learning cases, the Pincus Pedagogy sends students to on-site field visitations where they observe on-site operations and are then assigned tasks to creatively suggest ways of improving existing accounting, internal control, and information systems. Like the BAM, the Pincus Pedagogy avoids lecturing and classroom drill. Therein lies the controversy. Students and faculty in subsequent courses often complain that the Pincus Pedagogy students do not know the fundamental prerequisites of basic accounting needed for intermediate and advanced-level accounting courses.  Two possible links of interest on the controversial Pincus Pedagogy are as follows:  

Where the Pincus Pedagogy and the BAM Pedagogy differ lies in subject matter itself and stress on creativity. The BAM focuses on traditional subject matter that is found in such textbooks as intermediate accounting textbooks. The BAM Pedagogy simply requires that students learn any way they want to learn on their own since students remember best what they learned by themselves. The Pincus Pedagogy does not focus on much of the debit and credit "rules" found in most traditional textbooks. Students are required to be more creative at the expense of memorizing the "rules."

The Pincus Pedagogy is motivated by the belief that traditional lecture/drill pedagogy at the basic accounting and tax levels discourages the best and most creative students from pursuing careers in the accountancy profession. The BAM pedagogy is motivated more by the belief that lecturing is a poor pedagogy for long-term memory of technical details. What is interesting is that the leading proponents of getting away from the lecture/drill pedagogy (i.e., Karen Pincus and Anthony Catanach) were previously two of the very best lecturers in accountancy. If you have ever heard either of them lecture, I think you would agree that you wish all your lecturers had been only half as good. I am certain that both of these exceptional teachers would agree that lecturing is easier than any of the alternatives. However, they do not feel that lecturing is the best alternative for top students.

Between lecturing and the BAM Pedagogy, we have case method teaching. Case method teaching is a little like lecturing and a little like the BAM with some instructors providing answers in case wrap ups versus some instructors forcing students to provide all the answers. Master case teachers at Harvard University seldom provide answers even in case wrap ups, and often the cases do not have any known answer-book-type solutions. The best Harvard cases have alternative solutions with success being based upon discovering and defending an alternative solution. Students sometimes interactively discover solutions that the case writers never envisioned. I generally find case teaching difficult at the undergraduate level if students do not yet have the tools and maturity to contribute to case discussions. Interestingly, it may be somewhat easier to use the BAM at the undergraduate level than Harvard-type cases. The reason is that BAM instructors are often dealing with more rule-based subject matter such as intermediate accounting or tax rather than conceptual subject matter such as strategic decision making, business valuation, and financial risk analysis.

The hardest pedagogy today is probably a Socratic pedagogy online with instant messaging communications, in which the instructor is on call about 60 hours per week from his or her home. The online instructor monitors the chats and team communications between students in the course at most any time of day or night. Amy Dunbar can tell you about this tedious pedagogy since she's using it for tax courses and will be providing a workshop that tells about how to do it and how not to do it. The next scheduled workshop precedes the AAA Annual Meetings on August 1, 2003 in Hawaii. You can also hear Dr. Dunbar and view her PowerPoint show from a previous workshop at http://www.cs.trinity.edu/~rjensen/002cpe/02start.htm#2002 

In conclusion, always remember that there is no optimal pedagogy for all circumstances. All learning is circumstantial, based upon such key ingredients as student maturity, student motivation, instructor talent, instructor dedication, instructor time, library resources, technology resources, and many other factors that come to bear at each moment in time. And do keep in mind that how you teach may determine which students you keep as majors and which you turn away. 

I tend to agree with the accountancy firms that contend that traditional lecturing probably turns away many of the top students who might otherwise major in accountancy. 

At the same time, I tend to agree with students who contend that they took accounting courses to learn accounting rather than economics, computer engineering, and behavioral science.

Bob Jensen

-----Original Message----- 
From: Lou&Bonnie [mailto:gyp1@EARTHLINK.NET]  
Sent: Thursday, January 16, 2003 5:03 PM

I am a beginning accounting instructor (part-time) at a local community college. I am applying for a full-time faculty position, but am having trouble with a question: methodology in accounting--what works best for a diversified group of individuals? Some students work with accounting on a computer but have no understanding of what the information they are entering really means, while other individuals have no accounting experience whatsoever. What is the best methodology to use: lecture, overheads, classroom participation? I am not sure and I would like your feedback. Thank you in advance for your help. 

Yvonne


January 20, 2003 reply from Thomas C. Omer [omer@UIC.EDU

Don’t forget about Project Discovery going on at the University of Illinois Champaign-Urbana

Thomas C. Omer Associate Professor 
Department of Accounting University of Illinois At Chicago 
The Art of Discovery: Finding the forest in spite of the trees.

Thanks for reminding me Tom. A good link for Project Discovery is at http://accounting.rutgers.edu/raw/aaa/facdev/aeccuind.htm 


January 17, 2003 reply from David R. Fordham [fordhadr@JMU.EDU

I'll add an endorsement to Bob's advice to new teachers. His page should be required reading for Ph.D.s.

And I'll add one more tidbit.

Most educators overlook the distinction between "lectures" and "demonstrations".

There is probably no need for any true "lecture" in the field of accounting at the college level, even though it is still the dominant paradigm at most institutions.

However, there is still a great need for "live demonstrations", **especially** at the introductory level.

Accounting is a complex process. Introductory students in ANY field learn more about complex processes from demonstrations than probably any other method.

Then, they move on and learn more from "practicing" the process, once they've learned the steps and concepts of the process. And for intermediate and advanced students, practice is the best place to "discover" the nuances and details.

While "Discovery" is probably the best learning method of all, it is frequently very difficult to "discover" a complex process correctly from its beginning, on your own. Thus, a quick demonstration can often be of immense value at the introductory level. It's an efficient way of communicating sequences, relationships, and dynamics, all of which are present in accounting processes.

Bottom line: You can (and should) probably eliminate "lectures" from your classes. You should not entirely eliminate "demonstrations" from your classes.

Unfortunately, most education-improvement reform literature does not draw the distinction: anytime the teacher is doing the talking in front of a class, using blackboard and chalk or PowerPoint, they label it "lecture" and suggest you don't do it! This is, in my view, oversimplification, and very bad advice.

Your teaching will change a whole lot (for the better!) once you realize that students only need demonstrations of processes. You will eliminate a lot of material you used to "lecture" on. This will make room for all kinds of other things that will improve your teaching over the old "lecture" method: discussions, Socratic dialogs, cases and dilemmas, even some entertainment here and there.

Plus, the "lectures" you retain will change character. Take your cue from Mr. Wizard or Bill Nye the Science Guy, who appear to "lecture" (it's about the only thing you can do in front of a camera!), but whose entire program is pretty much devoted to demonstration. Good demonstrations do more than just demonstrate, they also motivate! Most lectures don't!

Another two pennies from the verbose one...

David R. Fordham 
PBGH Faculty Fellow 
James Madison University

January 16, 2003 message from Peter French [pjfrench@CELESTIAL.COM.AU

I found this source http://www.thomson.com/swcp/gita.html  and also Duncan Williamson has some very good basic material on his sites http://duncanwil.co.uk/index.htm  ; http://www.duncanwil.co.uk/objacc.html  ;

Don't forget the world lecture hall at http://www.utexas.edu/world/lecture/  ;

This reminds me of how I learned ... the 'real learning' in the workplace...

I remember my first true life consolidation - 130 companies in 1967. We filled a wall with butchers paper and had 'callers', 'writers' and 'adders' who called out the information to others who wrote out the entries and others who did the adding. I was 25 and quite scared. The Finance Director knew this and told me [1] to stick with 'T' accounts to be sure I was making the right entry - just stick the ones you are sure in and don't even think about the other entry - it must 'balance' it out; [2] just because we are dealing with 130 companies and several hundreds of millions of dollars don't lose sight of the fact that really it is no different from the corner store. I have never forgotten the simplistic approach. He said - if the numbers scare you, decimalise them to 100,000's in your mind - it helps ... and it did. He often used to say the Dr/Cr entries out aloud

I entered teaching aged 48 after having been in industry and practice for nearly 30 years. Whether I am teaching introductory accounting, partnership formation/dissolution, consolidations, asset revaluation, or tax effect accounting, I simply write up the same basic entries on the white board each session - I never use an overhead for this, I always write it up and say it out aloud, and most copy/follow me - and then recap and get on with the lesson. I always take time out to 'flow chart' what we are doing so that they never lose sight of the real picture ... this simple system works, and it has never let my students down.

There have been several movements away from rote learning at all levels of education - often with disastrous consequences. It has its place and I am very proud to rely on it. This works, and when it isn't broken, I am not about to try to fix it.

Good luck - it is the greatest responsibility in the world, and gives the greatest job satisfaction. It is worth every hour and every grey hair. To realise that you have enabled someone to change their life, made a dream come true, eclipses every successful takeover battle or tax fight I have ever won.

Good luck - may it be to you what it has been to me.

Peter French

January 17, 2003 reply from Michael O'Neil, CPA Adjunct Prof. Weber [Marine8105@AOL.COM]

I am currently teaching high school students, some of whom will hopefully go on to college. Parents expect you to teach the children, which really amounts to lecturing, or going over the text material. When you do this they do not read the textbook, nor do they know how to use the textbook to answer homework questions. If you don't lecture then the parents will blame you for "not" teaching their children the material.

I agree that discovery is the best type of learning, and the most fun. I teach geometry and accounting/consumer finance. Geometry lends itself to discovery, but to do so you need certain materials. At our level (high school) we are also dealing with several other issues you don't have at the college level. In my accounting classes I teach the debit/credit, etc. and then have them do a lot of work using two different accounting programs. When they make errors I have them discover the error and correct it. They probably know very little about posting and the formatting of financial statements, although we covered it. Before we used the programs we did a lot of pencil work.

Even when I taught accounting at the college and junior college level I found students were reluctant to, and not well prepared to, use their textbooks. Nor were they inclined to DO their homework.

I am sure that many of you have noticed a drop-off in the quality of students in recent years. I wish I could tell you that I see that it will change, but I do not see any effort in that direction. Education reminds me of a hot air balloon being piloted by people who lease the balloon and have no idea how to land it. They are just flying around enjoying the view. If we think in terms of bankruptcy, education is ready for Chapter 11.

Mike ONeil

January 17, 2003 reply from Chuck Pier [texcap@HOTMAIL.COM]

While not in accounting, I would like to share some information on my wife's experience with online education. She has a background (10 years) as a public school teacher and decided to get her graduate degree in library science. Since I was about to finish my doctoral studies and we knew we would be moving, she wanted to find a program that would allow her to move away and not lose too many hours in the transfer process. What she found was the online program at the University of North Texas (UNT) in Denton. Through this program she will be able to complete a 36-hour American Library Association accredited Master's degree in Library Science and only spend a total of 9 days on campus. The 9 days are split into a one-day session and 2 four-day sessions, which can be combined into 1 five- and 1 four-day session. Other than these 9 days the entire course is conducted over the internet. The vast majority is asynchronous, but there are some parts conducted in a synchronous manner.

She has completed about 3/4 of the program and is currently in Denton for her last on campus session. While I often worry about the quality of online programs, after seeing how much work and time she is required to put in, I don't think I should worry as much. I can honestly say that I feel she is getting a better, more thorough education than most traditional programs. I know at a minimum she has covered a lot more material.

All in all her experience has been positive and this program fit her needs. I think the MLS program at UNT has been very successful to date and appears to be growing quite rapidly. It may serve as a role model for programs in other areas.

Chuck Pier

Charles A. Pier 
Assistant Professor Department of Accounting 
Walker College of Business 
Appalachian State University 
Boone, NC 28608 email:
pierca@appstate.edu  828-262-6189


Academic Whores: School Systems into Lowering Standards for Achievement Tests and Graduation
Some states are rigging achievement tests to get more money and deceive the public
Will future college graduates in President Obama's home town be able to read and divide 37/13?
But they will be college "graduates" if community colleges lower standards like their K-12 counterparts.

President Obama's American Graduation Initiative

From the Creative Commons on July 15, 2009 --- http://creativecommons.org/weblog/entry/15818

President Obama announced yesterday the American Graduation Initiative, a twelve billion dollar plan to reform U.S. community colleges. The initiative calls for five million additional community college graduates by 2020, and plans that “increase the effectiveness and impact of community colleges, raise graduation rates, modernize facilities, and create new online learning opportunities” to aid this goal.

A significant component of the initiative is the plan to “create a new online skills laboratory.” From the fact sheet,

“Online educational software has the potential to help students learn more in less time than they would with traditional classroom instruction alone. Interactive software can tailor instruction to individual students like human tutors do, while simulations and multimedia software offer experiential learning. Online instruction can also be a powerful tool for extending learning opportunities to rural areas or working adults who need to fit their coursework around families and jobs. New open online courses will create new routes for students to gain knowledge, skills and credentials. They will be developed by teams of experts in content knowledge, pedagogy, and technology and made available for modification, adaptation and sharing. The Departments of Defense, Education, and Labor will work together to make the courses freely available through one or more community colleges and the Defense Department’s distributed learning network, explore ways to award academic credit based upon achievement rather than class hours, and rigorously evaluate the results.”

It is important to note here the difference between “open” and simply accessible “online”. Truly open resources for education are clearly designated as such with a standard license that allows not only access, but the freedoms to share, adapt, remix, or redistribute those resources. The educational materials that make up the new open online courses for this initiative should be open in this manner, especially since they will result from a government plan. We are excited about this initiative and hope the license for its educational materials will allow all of these freedoms. Catherine Casserly, formerly in charge of open educational resources at the William and Flora Hewlett Foundation (now at the Carnegie Foundation for the Advancement of Teaching), writes,

“Today at Macomb College, President Barack Obama announced a proposal to commit $50 million for the development of open online courses for community colleges as part of the American Graduation Initiative: Stronger American Skills through Community Colleges. As proposed, the courses will be freely available for use as is and for adaption as appropriate for targeted student populations. The materials will carry a Creative Commons license.”

You can read the official announcement at the White House site on their blog and visit the briefing room for the full fact sheet.

Jensen Comment
Given the troublesome fact that 80% of U.S. college graduates seeking jobs could not find jobs requiring college degrees, much more is needed than simply getting more students in the U.S. to graduate from college.

 

July 15, 2009 reply from AMY HAAS [haasfive@MSN.COM]

Excuse me for bringing up an often overlooked point, but getting students into community colleges is easy. Getting them to do the college-level work needed to graduate is not! As an instructor at an urban community college for more than 16 years, I find that the typical community college student lacks study skills and/or the motivation to succeed. They will come to class, but getting them to actually do work outside the classroom, even with tons of online resources available, is often like "pulling teeth". They do not make the time for it.

Amy Haas

July 15 reply from Flowers, Carol [cflowers@OCC.CCCD.EDU]

I am in agreement with Amy. This piece that Bob published implies to me that EVERYONE should have a college education. I think that is the problem with education. This mentality creates, once again, entitlement, not motivation. Society has taken away the motivation that individuals once had. Why work for it when it can be given to you? There is an old adage ... you can lead a horse to water, but ...!!!

I see this as more tax dollars going to waste. I have robust e-packs and online classes, and do students take advantage of them? Some do; most "don't have the time" -- they are attempting to carry full loads at two schools and work a full-time job. Maybe we should be funding time management and realistic expectations programs.

The two examples I had this Easter were doing poorly -- one was carrying two full-time jobs and a full school load; the other, two full-time school loads and a 1 1/2 work load. Both felt I was requiring too much and should drop my standards because of their poor time management. I worked full time and carried 12 units (no social life) ... why not more units or work? Because I wanted to be successful. If school takes longer than 4 years to complete, so be it. I received no help. My family couldn't afford it, so I realized if I wanted it I had to do it myself. I think many of us can tell the same story and don't feel it diminished, but rather enhanced, our motivation.

July 15, 2009 reply from Patricia Doherty [pdoherty@BU.EDU]

The "time" factor is another issue entirely, I think. Many of my students (at a 4-year private university) also have jobs, ranging from 10-hour work study to full time or nearly so, to afford our astronomical tuition. That's become life. Should there be more options for them? Yes, I think so. Many of them are very motivated - one of my summer term students is working full time while attending school ... and has a 4.0 GPA! Her mom is a single parent with limited means, so she has to help because she wants to be at this school. My own adult daughter is back in school. Her financial aid is not full tuition. She also works nearly full time - and remains on the Dean's List. I am meantime trying to figure out this year where my husband and I will find the money to meet the rest of the tuition, because I don't want her to have to drop out. So I completely understand students who are pressed for time because of work obligations. But the ones who really want to be there find a way to use the resources available to them to succeed. For the others, the lack of time to use what you provide is an excuse, nothing more. They need to find a better reason for not doing well.

July 15, 2009 reply from Ed Scribner [escribne@NMSU.EDU]

Amy et al.,

I kind of like Zucker’s article that I may have mentioned before:

http://www.ams.org/notices/199608/comm-zucker.pdf 

Ed

Ed Scribner New Mexico State University Las Cruces, NM, USA


American RadioWorks: Testing Teachers (radio broadcast) --- http://americanradioworks.publicradio.org/features/testing_teachers/

"Good and Bad Teachers: How to Tell the Difference," by Nobel Laureate Gary Becker, Becker-Posner Blog, September 23, 2012 ---
http://www.becker-posner-blog.com/2012/09/good-and-bad-teachers-how-to-tell-the-difference-becker.html

"Rating Teachers," by Judge Richard Posner, Becker-Posner Blog, September 23, 2012 ---
http://www.becker-posner-blog.com/2012/09/rating-teachersposner.html


"GRE and SAT validity," by Stephen Hsu, Information Processing, June 8, 2011 ---
http://infoproc.blogspot.com/2011/06/gre-and-sat-validity.html

GPA-SAT correlations
"Psychometric thresholds for physics and mathematics," by Stephen Hsu and James Schombert, MIT's Technology Review, May 24, 2010 ---
http://www.technologyreview.com/blog/posts.aspx?bid=354

This is a follow up to our earlier paper on GPA-SAT correlations. Click below for the pdf.
Non-linear Psychometric Thresholds for Physics and Mathematics

ABSTRACT
We analyze 5 years of student records at the University of Oregon to estimate the probability of success (as defined by superior undergraduate record; sufficient for admission to graduate school) in Physics and Mathematics as a function of SAT-M score. We find evidence of a non-linear threshold: below SAT-M score of roughly 600, the probability of success is very low. Interestingly, no similar threshold exists in other majors, such as Sociology, History, English or Biology, whether on SAT combined, SAT-R or SAT-M. Our findings have significant implications for the demographic makeup of graduate populations in mathematically intensive subjects, given the current distribution of SAT-M scores.

 
There is clearly something different about the physics and math GPA vs SAT distributions compared to all of the other majors we looked at (see figure 1 in the paper). In the other majors (history, sociology, etc.) it appears that hard work can compensate for low SAT score. But that is not the case in math and physics.

One interesting question is whether the apparent cognitive threshold is a linear or non-linear effect. Our data suggests that the probability of doing well in any particular quarter of introductory physics may be linear with SAT-M, but the probability of having a high cumulative GPA in physics or math is very non-linear in SAT-M. See figure below: the red line is the upper bound at 95% confidence level on the probability of getting an A in a particular quarter of introductory physics, and the blue line is the probability of earning a cumulative GPA of at least 3.5 or so.

Continued in article
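The linear-versus-threshold distinction Hsu describes can be sketched numerically. The toy model below uses invented parameters (the 400-to-800 ramp, the 600 threshold, and the logistic steepness are all assumptions for illustration, not values from the paper); it shows only the qualitative shapes of the two curves.

```python
import math

# Hypothetical illustration only -- parameters are invented, not the
# authors' fitted model.  It contrasts a roughly linear single-course
# success probability with a sharply non-linear (logistic) probability
# of a high cumulative GPA, both as functions of SAT-M.

def p_single_quarter(sat_m):
    """Assumed linear probability of an A in one quarter of intro physics."""
    return max(0.0, min(1.0, (sat_m - 400) / 400))  # 0 at SAT-M 400, 1 at 800

def p_high_cumulative_gpa(sat_m, threshold=600, steepness=0.03):
    """Assumed logistic probability of a cumulative GPA of 3.5 or better."""
    return 1.0 / (1.0 + math.exp(-steepness * (sat_m - threshold)))

for score in (500, 550, 600, 650, 700):
    print(f"SAT-M {score}: per-quarter {p_single_quarter(score):.2f}, "
          f"cumulative {p_high_cumulative_gpa(score):.2f}")
```

Near the assumed 600 threshold the cumulative-GPA probability rises steeply while the per-quarter probability changes only gradually -- the qualitative pattern the quoted passage describes.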

Jensen Comment
Near perfection in grade averages is increasing due to grade inflation in both high school and college ---
http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor

Hence I would think SAT, ACT, GRE, GMAT, LSAT, and MCAT standardized tests would be used to further partition graduates with stellar grade averages.

Tests measure cognitive ability, but grades measure motivation as long as grade inflation does not ruin everything in education.

About ETS Research --- http://www.ets.org/research
More credit should be given to the efforts made by ETS to reduce cultural and disability factors in testing.

Paying Students to Raise Test Scores ---
http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GMAT


The New GMAT:  Part 1
"The New GMAT: Questions for a Data-Rich World," by Alison Damast, Business Week, May 14, 2012 ---
http://www.businessweek.com/articles/2012-05-14/the-new-gmat-questions-for-a-data-rich-world

Editor’s Note: This is the first in a three-part series on the new GMAT, which makes its official debut on June 5. In this article, we examine the conceptual building blocks for the test’s new Integrated Reasoning section.

On a blustery day in February 2009, a group of nine deans and faculty members from U.S. and European business schools huddled together in a conference room in McLean, Va., at the Graduate Management Admission Council’s headquarters. They were there to discuss what would be some of the most radical changes to the Graduate Management Admission Test (GMAT) in the exam’s nearly 60-year history.

Luis Palencia, then an associate dean at Spain’s IESE Business School, was eager to press his case for the skills he thought today’s MBAs needed to have at their fingertips. Business students must be able to nimbly interpret and play with data in graphs, spreadsheets, and charts, using the information to draw swift but informed conclusions, he told his colleagues.

“The GMAT was not becoming obsolete, but it was failing to identify the skills which might be important to warrant the success of our future candidates,” he said in a phone interview from Barcelona three years later.

By the time the faculty advisory group's meeting concluded two days later, they had come up with a set of recommendations that would serve as a framework for what would eventually become the new “Integrated Reasoning” section of the Next Generation GMAT, which has been in beta testing for two years and will be administered to applicants for the first time on June 5.

Until now, the B-school entrance exam, which was administered 258,192 times worldwide in 2011, was made up of verbal, quantitative, and two writing sections. The new section, which replaces one of the writing sections, is the biggest change to the GMAT since the shift to computer-adaptive testing 15 years ago, and one that has been in the works since 2006, when GMAC first decided to revisit the exam and the skills it was testing, says Dave Wilson, president and chief executive officer of GMAC.

“At that time, we got a pretty good handle that the GMAT was working, but we wanted to know if there was anything that we weren’t measuring that would provide real value to the schools,” Wilson says.

It turned out there was a whole slew of new skills business school faculty believed could be added to the exam. The recommendations put forth by Palencia and the rest of the committee that convened in 2009 served as the conceptual building blocks for what a new section might look like. Later that year, GMAC surveyed nearly 740 faculty members around the world, from business professors to admissions officers, who agreed with many of the committee’s findings and suggested that students needed certain proficiencies to succeed in today’s technologically advanced, data-driven workplaces.

For example, they gave “high importance” ratings to skills such as synthesizing data, evaluating data from different sources, and organizing and manipulating it to solve multiple, interrelated problems, according to the Next Generation GMAC Skills Survey report.

Those are all examples of skills that can now be found on the 30-minute Integrated Reasoning section, which GMAC has spent $12 million developing over the past few years, Wilson says. It will have 12 questions and include pie charts, graphs, diagrams, and data tables. The section employs four different types of questions that will allow students to flex their analytical muscles.

Continued in article

Bob Jensen's threads on assessment are at
http://faculty.trinity.edu/rjensen/Assess.htm


"Obama’s Union-Friendly, Feel-Good Approach to Education," by Kyle Olson, Townhall, March 30, 2011 ---
http://townhall.com/columnists/kyleolson/2011/03/30/obama%E2%80%99s_union-friendly,_feel-good_approach_to_education

  • The Obama administration, principally the president and Education Secretary Arne Duncan, are now routinely making public statements which are leading to one conclusion: instead of fixing American education, we should dumb down the standards.

    According to the Associated Press, President Obama “is pushing a rewrite of the nation’s education law that would ease some of its rigid measurement tools” and wants “a test that ‘everybody agrees makes sense’ and administer it in less pressure-packed atmospheres, potentially every few years instead of annually.”

    The article goes on to say that Obama wants to move away from proficiency goals in math, science and reading, in favor of the ambiguous and amorphous goals of student readiness for college and career.

    Obama’s new focus comes on the heels of a New York Times report that 80% of American public schools could be labeled as failing under the standards of No Child Left Behind.

    Put another way: the standards under NCLB have revealed that the American public education system is full of cancer. Instead of treating the cancer, Obama wants to change the test, as if ignoring the MRI somehow makes the cancer go away.

    So instead of implementing sweeping policies to correct the illness, Obama is suggesting that we just stop testing to pretend it doesn’t exist.

    If Obama were serious about curing the disease, one of the best things he could do is to ensure that there is a quality teacher in every classroom in America. Of course, that would mean getting rid of teacher tenure and scrapping seniority rules that favor burned-out teachers over ambitious and innovative young teachers.

    That means standing up to the teacher unions. For a while, it looked like Obama would get tough with the unions, but not anymore. With a shaky economy and three wars, it looks like Obama’s re-election is in serious jeopardy. He needs all hands on deck – thus the new union-friendly education message.

    Obama’s new direction will certainly make the unionized adults happy. They’ve hated NCLB from the get-go.

    And the unions will love Obama’s talk about using criteria other than standardized testing in evaluating schools.

    He doesn’t get specific, of course, but I bet I can fill in the gaps. If testing is too harsh, perhaps we can judge students and schools based on how hard they try or who can come up with the most heart-wrenching excuse for failure or how big the dog was that ate their homework.

    Continued in article

  • "Department of Injustice," by Walter E. Williams, Townhall, March 30, 2011 ---
    http://townhall.com/columnists/walterewilliams/2011/03/30/department_of_injustice

    One of the requirements to become a Dayton, Ohio police officer is to successfully pass the city's two-part written examination. Applicants must correctly answer 57 of 86 questions on the first part (66 percent) and 73 of 102 (72 percent) on the second part. Dayton's Civil Service Board reported that 490 candidates passed the November 2010 written test, 57 of whom were black. About 231 of the roughly 1,100 test takers were black.

    The U.S. Department of Justice, led by Attorney General Eric Holder, rejected the results of Dayton's Civil Service examination because not enough blacks passed. The DOJ has ordered the city to lower the passing score. The lowered passing grade requires candidates to answer 50 of 86 (58 percent) questions correctly on the first part and 64 of 102 (63 percent) of questions on the second. The DOJ-approved scoring policy requires potential police officers to earn the equivalent of an "F" on the first part and a "D" on the second. Based on the DOJ-imposed passing scores, a total of 748 people, 258 more than before, were reported passing the exam. Unreported was just how many of the 258 are black.

    Keith Lander, chairman of the Dayton chapter of the Southern Christian Leadership Conference, and Dayton NAACP president Derrick Foward condemned the DOJ actions.

    Mr. Lander said, "Lowering the test score is insulting to black people," adding, "The DOJ is creating the perception that black people are dumb by lowering the score. It's not accomplishing anything."

    Mr. Foward agreed and said, "The NAACP does not support individuals failing a test and then having the opportunity to be gainfully employed," adding, "If you lower the score for any group of people, you're not getting the best qualified people for the job."

    I am pleased by the positions taken by Messrs. Lander and Foward. It is truly insulting to suggest that black people cannot meet the same standards as white people and somehow justice requires lower standards. Black performance on Dayton's Civil Service exam is really a message about fraudulent high school diplomas that many black students receive.

    Continued in article
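The passing-score percentages quoted in Williams's column can be verified with a few lines of arithmetic (a quick check of the cited figures, not part of the article itself):

```python
# Required correct answers / total questions on parts 1 and 2 of Dayton's
# exam, before and after the DOJ-ordered lowering of the passing scores.
scores = {
    "original": [(57, 86), (73, 102)],
    "lowered":  [(50, 86), (64, 102)],
}

for label, parts in scores.items():
    for part, (need, total) in enumerate(parts, start=1):
        print(f"{label} part {part}: {need}/{total} = {need/total:.0%}")
# The four ratios round to 66%, 72%, 58%, and 63%, matching the column.
```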


    Assessment often gets caught in a tug of war between accountability and improvement.
    The Next Great Hope for Measuring Learning ---
    http://www.chronicle.com/article/The-Next-Great-Hope-for/238075?cid=at&utm_source=at&utm_medium=en&elqTrackId=49382afe872f46a0b64064c090db9e53&elq=152fd248a4d244b6a1dfcf39b37cbd7c&elqaid=11117&elqat=1&elqCampaignId=4277

    Jensen Comment
    When it comes to assessment I tend to think of how I want my brain surgeon to be assessed before he sticks something hard and sharp into my gray matter. I guess the accountant in me leans toward accountability.

     


    "Racial Stupidity and Malevolence," by Walter E. Williams, Townhall, September 8, 2010 ---
    http://townhall.com/columnists/WalterEWilliams/2010/09/08/racial_stupidity_and_malevolence

    The white liberal's agenda, coupled with that of black race hustlers, has had and continues to have a devastating impact on ordinary black people. Perhaps the most debilitating aspect of this liberal malevolence is in the area of education.

    Recently, I spoke with a Midwestern university engineering professor who was trying to help an inner-city black student who was admitted to the university's electrical engineering program. The student was sure that he was well prepared for an engineering curriculum; his high school had convinced him of that and the university recruiters supported that notion. His poor performance on the university's math placement exam required that he take remedial math courses. He's failed them and is now on academic probation after two semesters of earning less than a 2.0 grade point average.

    The young man and his parents were sure of his preparedness. After all, he had good high school grades, but those grades only meant that he was well behaved. The college recruiters probably knew this youngster didn't have the academic preparation for an electrical engineering curriculum. They were more concerned with racial diversity.

    This young man's background is far from unique. Public schools give most black students fraudulent diplomas that certify a 12th-grade achievement level. According to a report by Abigail Thernstrom, "The Racial Gap in Academic Achievement," black students in 12th grade dealt with scientific problems at the level of whites in the sixth grade; they wrote about as well as whites in the eighth grade. The average black high school senior had math skills on a par with a typical white student in the middle of ninth grade. The average 17-year-old black student could only read as well as the typical white child who had not yet reached age 13.

    Black youngsters who take the SAT exam earn an average score that's 70 to 80 percent of the score of white students, and keep in mind, the achievement level of white students is nothing to write home about. Under misguided diversity pressures, colleges recruit many black students who are academically ill equipped. Very often, these students become quickly disillusioned, embarrassed and flunk out, or they're steered into curricula that have little or no academic content, or professors practice affirmative-action grading. In any case, the 12 years of poor academic preparation is not repaired in four or five years of college. This is seen by the huge performance gap between blacks and whites on exams for graduate school admittance such as the GRE, MCAT and LSAT.

    Is poor academic performance among blacks something immutable or pre-ordained? There is no evidence for such a claim. Let's sample some evidence from earlier periods. In "Assumptions Versus History in Ethnic Education," in Teachers College Record (1981), Dr. Thomas Sowell reports on academic achievement in some of New York city's public schools. He compares test scores for sixth graders in Harlem schools with those in the predominantly white Lower East Side for April 1941 and December 1941.

    In paragraph and word meaning, Harlem students, compared to Lower East Side students, scored equally or higher. In 1947 and 1951, Harlem third-graders in paragraph and word meaning, and arithmetic reasoning and computation scored about the same as -- and in some cases, slightly higher, and in others, slightly lower than -- their white Lower East Side counterparts.

    Going back to an earlier era, Washington, D.C.'s Dunbar High School's black students scored higher in citywide tests than any of the city's white schools. In fact, from its founding in 1870 to 1955, most of Dunbar's graduates went off to college.

    Let's return to the tale of the youngster at the Midwestern college. Recruiting this youngster to be a failure is cruel, psychologically damaging and an embarrassment for his family. But the campus hustlers might come to the aid of the student by convincing him that his academic failure is a result of white racism and Eurocentric values.

    Bob Jensen's threads on grade inflation are at
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation


    "GRE and SAT validity," by Stephen Hsu, Information Processing, June 8, 2011 ---
    http://infoproc.blogspot.com/2011/06/gre-and-sat-validity.html

    GPA-SAT correlations
    "Psychometric thresholds for physics and mathematics," by Stephen Hsu and James Schombert, MIT's Technology Review, May 24, 2010 ---
    http://www.technologyreview.com/blog/posts.aspx?bid=354

    This is a follow up to our earlier paper on GPA-SAT correlations. Click below for the pdf.
    Non-linear Psychometric Thresholds for Physics and Mathematics

    ABSTRACT
    We analyze 5 years of student records at the University of Oregon to estimate the probability of success (as defined by superior undergraduate record; sufficient for admission to graduate school) in Physics and Mathematics as a function of SAT-M score. We find evidence of a non-linear threshold: below SAT-M score of roughly 600, the probability of success is very low. Interestingly, no similar threshold exists in other majors, such as Sociology, History, English or Biology, whether on SAT combined, SAT-R or SAT-M. Our findings have significant implications for the demographic makeup of graduate populations in mathematically intensive subjects, given the current distribution of SAT-M scores.

     
    There is clearly something different about the physics and math GPA vs SAT distributions compared to all of the other majors we looked at (see figure 1 in the paper). In the other majors (history, sociology, etc.) it appears that hard work can compensate for low SAT score. But that is not the case in math and physics.

    One interesting question is whether the apparent cognitive threshold is a linear or non-linear effect. Our data suggests that the probability of doing well in any particular quarter of introductory physics may be linear with SAT-M, but the probability of having a high cumulative GPA in physics or math is very non-linear in SAT-M. See figure below: the red line is the upper bound at 95% confidence level on the probability of getting an A in a particular quarter of introductory physics, and the blue line is the probability of earning a cumulative GPA of at least 3.5 or so
    .

    Continued in article

    Jensen Comment
    Near perfection in grade averages is increasing due to grade inflation in both high school and college ---
    http://faculty.trinity.edu/rjensen/assess.htm#RateMyProfessor

    Hence I would think SAT, ACT, GRE, GMAT, LSAT, and MCAT standardized tests would be used to further partition graduates with stellar grade averages.

    Tests measure cognitive ability, but grades measure motivation as long as grade inflation does not ruin everything in education.

    About ETS Research --- http://www.ets.org/research
    More credit should be given to efforts made by ETS to reduce cultural and disability factors in testing.

    Paying Students to Raise Test Scores ---
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GMAT

     


    Some states are rigging achievement tests to get more money and deceive the public
    Will future college graduates in President Obama's home town be able to read and divide 37/13?
    But they will be college "graduates" if community colleges lower standards like their K-12 counterparts.

    "Second City Ruse:  How states like Illinois rig school tests to hype phony achievement," The Wall Street Journal, July 18, 2009 --- http://online.wsj.com/article/SB124786847585659969.html#mod=djemEditorialPage

    When President Obama chose Arne Duncan to lead the Education Department, he cited Mr. Duncan's success as head of Chicago's public school system from 2001 to 2008. But a new education study suggests that those academic gains aren't what they seemed. The study also helps explain why big-city education reform is unlikely to occur without school choice.

    Mr. Obama noted in December that "in just seven years, Arne's boosted elementary test scores here in Chicago from 38% of students meeting the standard to 67%" and that "the dropout rate has gone down every year he's been in charge." But according to "Still Left Behind," a report by the Civic Committee of the Commercial Club of Chicago, a majority of Chicago public school students still drop out or fail to graduate with their class. Moreover, "recent dramatic gains in the reported number of CPS elementary students who meet standards on state assessments appear to be due to changes in the tests . . . rather than real improvements in student learning."

    Our point here isn't to pick on Mr. Duncan, but to illuminate the ease with which tests can give the illusion of achievement. Under the 2001 No Child Left Behind law, states must test annually in grades 3 through 8 and achieve 100% proficiency by 2014. But the law gives states wide latitude to craft their own exams and to define math and reading proficiency. So state tests vary widely in rigor, and some have lowered passing scores and made other changes that give a false impression of academic success.

    The new Chicago report explains that most of the improvement in elementary test scores came after the Illinois Standards Achievement Test was altered in 2006 to comply with NCLB. "State and local school officials knew that the new test and procedures made it easier for students throughout the state -- and throughout Chicago -- to obtain higher marks," says the report.

    Chicago students fared much worse on national exams that weren't designed by state officials. On the 2007 state test, for example, 71% of Chicago's 8th graders met or exceeded state standards in math, up from 32% in 2005. But results from the National Assessment of Educational Progress exam, a federal standardized test sponsored by the Department of Education, show that only 13% of the city's 8th graders were proficient in math in 2007. While that was better than 11% in 2005, it wasn't close to the 39 percentage-point increase reflected on the Illinois state exam.

    In Mr. Duncan's defense, he wasn't responsible for the new lower standards, which were authorized by state education officials. In 2006, he responded to a Chicago Tribune editorial headlined, "An 'A' for Everybody!" by noting (correctly) that "this is the test the state provided; this is the state standard our students were asked to meet." But this doesn't change the fact that by defining proficiency downward, states are setting up children to fail in high school and college. We should add that we've praised New York City test results that the Thomas B. Fordham Institute also claims are inflated, but we still favor mayoral control of New York's schools as a way to break through the bureaucracy and drive more charter schools.

    And speaking of charters, the Chicago study says they "provide one bright spot in the generally disappointing performance of Chicago's public schools." The city has 30 charters with 67 campuses serving 30,000 students out of a total public school population of 408,000. Another 13,000 kids are on wait lists because the charters are at capacity, and it's no mystery why. Last year 91% of charter elementary schools and 88% of charter high schools had a higher percentage of students meeting or exceeding state standards than the neighborhood schools that the students otherwise would have attended.

    Similar results have been observed from Los Angeles to Houston to Harlem. The same kids with the same backgrounds tend to do better in charter schools, though they typically receive less per-pupil funding than traditional public schools. In May, the state legislature voted to increase the cap on Chicago charter schools to 70 from 30, though Illinois Governor Pat Quinn has yet to sign the bill.

    Chicago Mayor Richard Daley deserves credit for hiring Mr. Duncan, a charter proponent. But in deference to teachers unions that oppose school choice, Mr. Daley stayed mostly silent during the debate over the charter cap. That's regrettable, because it's becoming clear that Chicago's claim of reform success among noncharter schools is phony.

    Bob Jensen's threads on assessment are at
    http://faculty.trinity.edu/rjensen/assess.htm

    Bob Jensen's threads on higher education controversies are at
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm


    One Impact of Higher Admission Standards --- Less Revenue
    "New Approach at U. of Phoenix Drives Down Parent Company's Stock," Inside Higher Ed, March 30, 2011 ---
    http://www.insidehighered.com/news/2011/03/30/qt#255383

    The Apollo Group on Tuesday announced a quarterly loss and enrollment declines at the University of Phoenix that were largely attributable to changes in the for-profit institution's policies aimed at ensuring that more of the students it enrolls can succeed academically. The company's announcement of its second quarter results drove down its stock price, Bloomberg reported. Apollo saw enrollment of new students in University of Phoenix degree programs fall by 45 percent from a year ago, and said its policy of requiring new students with few academic credits to enroll in a free orientation program to see if they are cut out for college-level work had suppressed enrollments in the short term but put it "on a path of more consistently delivering high quality growth" in the future. Phoenix, as the biggest and most visible player in the for-profit higher education sector, has been under intense scrutiny amid discussion of increased federal regulation, and it has put in place a series of changes (including changing how it compensates recruiters), its officials have said, to try to lead the industry in a new direction.

    Bob Jensen's threads on for-profit universities ---
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#ForProfitFraud

     


    Performance Evaluation
    Let's face it! Accounting, professors' job performance, and vegetable nutrition have a lot of systemic problems in common ---
    http://faculty.trinity.edu/rjensen/FraudConclusion.htm#BadNews

    American Council on Education - GED Testing --- http://www.acenet.edu/Content/NavigationMenu/ged/index.htm

    "Why I Hate Annual Evaluations," by Ben Yagoda, Chronicle of Higher Education, March 28, 2010 ---
    http://chronicle.com/article/Why-I-Hate-Annual-Evaluations/64815/

    There are three things I don't like about my job. Two of them are pretty obvious and completely unoriginal: correcting papers and attending department meetings. The third thing is somewhat obvious as well, but I hesitate to name it, for fear that it will make me look whiny.

    However, that battle has probably already been lost, so here goes: I hate my annual evaluation.

    To the extent that this evaluation is necessary, it is because of the collective-bargaining agreement between the University of Delaware and our campus chapter of the American Association of University Professors. As long as I've been here—going on 18 years—the agreement has divided our annual pay raises into two parts. The first part is across the board. This year our raise was 4 percent, of which 1.5 percent was across the board, meaning, for example, that a full professor making the minimum salary of about $85,000 got a raise of about $1,275.

    The other part of the raise is based on "merit," and it works as follows. The average faculty salary is calculated. Say it is $100,000. Every unit gets a pot of cash equivalent to 2.5 percent, or $2,500, multiplied by the number of faculty members in the unit. In my unit, the English department, that would be roughly 50 bodies. The chairman of the department evaluates each professor's performance. The professor who is precisely in the middle gets a $2,500 merit raise. Those rated higher will get more, those rated lower will get less, but the average merit raise has to be $2,500.

    In other words, no department can be a Lake Wobegon, where all the children are above average.
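    The merit-pool arithmetic described above can be sketched in a few lines. The numbers (a $100,000 average salary, a 2.5 percent merit pool) come from the example in the text; the proportional-to-rating allocation rule is my assumption for illustration, since the only constraint the article states is that the departmental average merit raise must equal the pool per head.

```python
# Sketch of the merit-pool arithmetic described above (illustrative only).
AVG_SALARY = 100_000   # average faculty salary in the article's example
MERIT_RATE = 0.025     # merit portion: 2.5% of the average salary per head

def merit_raises(ratings):
    """Allocate the merit pool so the average raise equals the pool per head.

    The allocation rule (proportional to rating) is hypothetical; the article
    only requires that the average merit raise come out to $2,500.
    """
    pool_per_head = MERIT_RATE * AVG_SALARY        # $2,500 in the example
    avg_rating = sum(ratings) / len(ratings)
    return [pool_per_head * r / avg_rating for r in ratings]

raises = merit_raises([4, 5, 5, 6])
# The professor exactly at the average rating (5) gets exactly $2,500, and the
# departmental average is $2,500 no matter how the ratings spread -- which is
# why no department can be a Lake Wobegon.
```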

    On paper, this all seems reasonable, and I freely admit that part of my outsized resentment of the process stems from my own quirks. It requires a lot of paperwork and rewards good record keeping. I despise paperwork and am truly terrible at record keeping. (It is a cruel twist of fate in my world that evaluation time and tax time arrive together.) My early experience in the working world taught me that I also deeply and irrationally resent being judged by a boss, which is probably the main reason why, before becoming an academic, I was a freelance writer and thus my own boss. Now here I am being evaluated by the department chair, who isn't really my boss, but at this point the difference seems negligible.

    But I maintain that some of my gripes have objective merit. American colleges and universities, including the University of Delaware, still view faculty members as a group of scholars and teachers devoted to and bound by self-instilled standards of excellence. Tenure, as long as it continues to exist, must and does require evaluation. But—crucially—at Delaware and elsewhere, that evaluation and judgment are performed not by the chair but by one's peers (ultimately ratified or not, to be sure, by provosts, presidents, and other higher-ups).

    For faculty members who will eventually go up for tenure, it definitely makes sense to get input from as many sources as possible, so I'll grant that for them an annual evaluation by the chair makes sense. But for tenured faculty members? No—at least not the way we do it at my university.

    Every year around this time, we submit our materials—publications, syllabi, evidence of service, and so forth—and fill out a Web form. The chair, who has meanwhile received copies of students' evaluations of our teaching, rates all of us on a scale of 1 (the worst) to 9 (the best) in scholarship, service, and teaching. Different percentages are accorded to each area based on an elaborate formula, but generally speaking, for tenured and tenure-track professors, scholarship counts for roughly 50 percent, teaching 40 percent, and service 10 percent.

    The whole thing is undignified and unseemly. What, exactly, is the difference between a 5 and 7 in service? Number of committees served on? Hours spent? Scholarship is even more thorny, because as everyone knows, an article does not equal an article. Do two short articles in PMLA equal a New York Review of Books mega-essay, or do I have to throw in a draft choice and a player to be named later? Number of words produced and place of publication are important, to be sure, but quality trumps them both. And how can our chair be expected to judge the quality of the work of every faculty member, some of whom work in fields very different from his? The answer is he can't.

    Evaluating teaching has its own well-documented set of problems. We honor faculty autonomy to the extent that evaluators are not welcome in another professor's classroom, and we are still a good distance away from giving students No Child Left Behind tests that would "assess" the extent to which a certain course has achieved its "goals." That's well and good, but it doesn't leave much as a basis for judgment. There are syllabi and the narrative Teaching Statements we provide each year, and sometimes the evidence of a new course devised and designed, but the main thing used to assess teaching are student evaluations. Those have some value, but they are most assuredly not the whole story when it comes to the quality of one's teaching. If they were, we might as well outsource the whole process to RateMyProfessors.com.

    The unseemliness multiplies when my colleagues (as they often do) complain loudly and frequently about the marks they have gotten. I would be embarrassed to tell you how many laments I have listened to along the lines of, "I published a book, and he only gave me a 7!" I would bet our students don't kvetch as much about their grades.

    And what are the consequences of our evaluations? In the 50-40-10 scholarship-teaching-service ratio, the difference between a 7 and a 9 rating in scholarship is about $540 a year. After taxes, that comes out to maybe $400 a year, or $8 a week. Not only is that not much, but for almost everyone, it gets evened out over time; some years, you can expect to get maybe a little lower rating than you "really" deserve, some years a little higher. For this my colleagues gnash their teeth and lose sleep?

    Several years ago, I came up with another way to evaluate faculty performance, based on the understanding that we all expect excellent work from ourselves and one another. Take the average merit raise and give almost everyone in the department a raise slightly lower than that; in the example I've been working with, that could be $2,300. That way, a handful of colleagues who publish major books or get major awards or stellar teaching evaluations can receive a slightly higher raise. And if a couple of people are blatantly not carrying their weight, they can get a little less.

    I proposed my idea at a department meeting, and it was summarily shot down. My explanation for this is Freud's notion of the narcissism of small differences—our need to exaggerate the minimal distinctions between ourselves and people very much like ourselves.

    Even as I write, we are negotiating our next collective-bargaining agreement. Word on the street is that salaries will be frozen for next year. If that happens, I will be secretly glad, and you know why: It could very possibly mean no annual evaluation!

    Ben Yagoda is a professor of English at the University of Delaware and author, most recently, of Memoir: A History (Riverhead Books, 2009). His blog on higher education is at http://campuscomments.wordpress.com

    Bob Jensen's threads on higher education are at
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm

     


    "What Gets Measured in Education," by Alan Kantrow, Harvard Business Review Blog, October 8, 2013 --- Click Here
    http://blogs.hbr.org/2013/10/what-gets-measured-in-education/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+harvardbusiness+%28HBR.org%29&cm_ite=DailyAlert-100913+%281%29&cm_lm=sp%3Arjensen%40trinity.edu&cm_ven=Spop-Email 

    Bob Jensen's threads on assessment ---
    http://faculty.trinity.edu/rjensen/Assess.htm


    Assessment often gets caught in a tug of war between accountability and improvement.
    The Next Great Hope for Measuring Learning ---
    http://www.chronicle.com/article/The-Next-Great-Hope-for/238075?cid=at&utm_source=at&utm_medium=en&elqTrackId=49382afe872f46a0b64064c090db9e53&elq=152fd248a4d244b6a1dfcf39b37cbd7c&elqaid=11117&elqat=1&elqCampaignId=4277

    Jensen Comment
    When it comes to assessment I tend to think of how I want my brain surgeon to be assessed before he sticks something hard and sharp into my gray matter. I guess the accountant in me leans toward accountability.


    "Study: Little Difference in Learning in Online and In-Class Science Courses," Inside Higher Ed, October 22, 2012 ---
    http://www.insidehighered.com/quicktakes/2012/10/22/study-little-difference-learning-online-and-class-science-courses

    A study in Colorado has found little difference in the learning of students in online or in-person introductory science courses. The study tracked community college students who took science courses online and in traditional classes, and who then went on to four-year universities in the state. Upon transferring, the students in the two groups performed equally well. Some science faculty members have expressed skepticism about the ability of online students in science, due to the lack of group laboratory opportunities, but the programs in Colorado work with companies to provide home kits so that online students can have a lab experience.
     

     

    Jensen Comment
    Firstly, note that online courses are not necessarily mass education (MOOC) styled courses. The student-student and student-faculty interactions can be greater online than onsite. For example, my daughter's introductory chemistry class at the University of Texas had over 600 students, and by the date of the final examination the instructor had never met her and had zero control over her final grade. On the other hand, her microbiology instructor in a graduate course at the University of Maine became her husband over 20 years ago.

    Another factor is networking. For example, Harvard Business School students meeting face-to-face in courses bond in life-long networks that may be stronger than for students who've never established networks via classes, dining halls, volleyball games, softball games, rowing on the Charles River, etc. There's more to learning than is typically tested in competency examinations.

    My point is that there are many externalities to both onsite and online learning. And concluding that there's "little difference in learning" depends upon what you mean by learning. The SCALE experiments at the University of Illinois found that online students having the same instructor tended to do slightly better than onsite students. This is partly because there are fewer logistical time wasters in online learning. The effect becomes larger for off-campus students where commuting time (as in Mexico City) can take hours going to and from campus.
    http://faculty.trinity.edu/rjensen/255wp.htm

    Bob Jensen's threads on assessment are at
    http://faculty.trinity.edu/rjensen/Assess.htm


    Khan Academy for Free Tutorials (now including accounting tutorials) Available to the Masses ---
    http://en.wikipedia.org/wiki/Khan_Academy

    A Really Misleading Video
    Do Khan Academy Videos Promote “Meaningful Learning”?   Click Here
    http://www.openculture.com/2012/06/expert_gently_asks_whether_khan_academy_videos_promote_meaningful_learning.html?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+OpenCulture+%28Open+Culture%29

    If you ever wondered whether professional scientists are skeptical about some of the incredibly fun, attractive and brief online videos that purport to explain scientific principles in a few minutes, you’d be right.

    Derek Muller completed his doctoral dissertation by researching the question of what makes for effective multimedia to teach physics. Muller curates the science blog Veritasium and received his Ph.D. from the University of Sydney in 2008.

    It’s no small irony that Muller’s argument, that online instructional videos don’t work, has reached its biggest audience in the form of an online video. He launches right in, lecture style, with a gentle attack on the Khan Academy, which has famously flooded the Internet with free instructional videos on every subject from arithmetic to finance.

    While praising the academy’s founder, Salman Khan, for his teaching and speaking talent, Muller contends that students actually don’t learn anything from science videos in general.

    In experiments, he asked subjects to describe the force acting upon a ball when a juggler tosses it into the air. Then he showed them a short video that explained gravitational force.

    In tests taken after watching the video, subjects provided essentially the same description as before. Subjects said they didn’t pay attention to the video because they thought they already knew the answer. If anything, the video only made them more confident about their own ideas.

    Science instructional videos, Muller argues, shouldn’t just explain correct information, but should tackle misconceptions as well. He practices this approach in his own work, like this film about weightlessness in the space station. Having to work harder to think through why an idea is wrong, he says, is just as important as being told what’s right.

     

    Jensen Comment
    In my viewpoint learning efficiency and effectiveness is so complicated in a multivariate sense that no studies, including Muller's experiments, can be extrapolated to something as vast as the Khan Academy.

    For example, the learning from a given tutorial depends immensely on the aptitude of the learner and the intensity of concentration and replay of the tutorial.

    For example, learning varies over time such as when a student is really bad at math until a point is reached where that student suddenly blossoms in math.

    For example, the learning from a given tutorial depends upon the ultimate testing expected.
    What they learn depends upon how we test:

    "How You Test Is How They Will Learn," by Joe Hoyle, Teaching Financial Accounting Blog, January 31, 2010 ---
     http://joehoyle-teaching.blogspot.com/2010/01/how-you-test-is-how-they-will-learn.html 

    I consider Muller's video misleading and superficial.

    Here are some documents on the multivariate complications of the learning process:


    Khan Academy --- http://en.wikipedia.org/wiki/Khan_Academy

    The Trouble With Derek Muller
    The trouble with Robert Talbert is that he relies on Derek Muller's superficial experiments on undergraduates and then extrapolates the findings to the entire world. He's Exhibit A about what we warn doctoral students about when they are learning how to conduct research and write up results of research.


    It all boils down to how badly a student wants to learn something like how to take the derivative of a polynomial. Chances are that if a student is totally motivated and intent on learning this process, he or she can keep studying and re-studying Khan Academy videos for mastery learning far beyond what most any other pedagogy on this subject can offer.

    The writings of Derek Muller are too superficial for my liking. Of course, learning from the Khan Academy can be superficial if the students are not intently focused on really, really wanting to learn. So what does that prove about the students who are intently focused on really, really wanting to learn?

    The Khan Academy is really intended for students who really, really want to learn. Don't knock it just because it doesn't work as well for unmotivated students used in superficial experiments.

    A Really, Really Misleading Video
    Do Khan Academy Videos Promote “Meaningful Learning”?   Click Here
    http://www.openculture.com/2012/06/expert_gently_asks_whether_khan_academy_videos_promote_meaningful_learning.html?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+OpenCulture+%28Open+Culture%29

    A Really Misleading Article
    "The trouble with Khan Academy," by Robert Talbert, Chronicle of Higher Education, July 3, 2012
    http://chronicle.com/blognetwork/castingoutnines/2012/07/03/the-trouble-with-khan-academy/?cid=wc&utm_source=wc&utm_medium=en

    Competency-Based Programs (where instructors do not assign the grades) Can Work Well But Do Not Always Work Well

    A Research Report
    "Competency-Based Degree Programs in the U.S. Postsecondary Credentials for Measurable Student Learning and Performance," Council on Adult and Experiential Learning," 2012 ---
    http://www.cael.org/pdfs/2012_CompetencyBasedPrograms

     


    American RadioWorks: Testing Teachers (radio broadcast) --- http://americanradioworks.publicradio.org/features/testing_teachers/


    It's About Time
    "Settlement Reached in Essay-Mill Lawsuit." by Paige Chapman, Chronicle of Higher Education, October 25, 2010 ---
    http://chronicle.com/blogs/wiredcampus/settlement-reached-in-essay-mill-lawsuit/27852?sid=wc&utm_source=wc&utm_medium=en

    Bob Jensen's threads about academic cheating ---
    http://faculty.trinity.edu/rjensen/Plagiarism.htm

    Questions
    Should a doctoral student be allowed to hire an editor to help write her dissertation? 
    If the answer is yes, should this also apply to any student writing a course project, take home exam, or term paper?

    Answer
    Forwarded by Aaron Konstam
    "Academic Frauds," The Chronicle of Higher Education, November 3, 2003 --- http://chronicle.com/jobs/2003/11/2003110301c.htm 

    Question (from "Honest John"): I'm a troubled member of a dissertation committee at Private U, where I'm not a regular faculty member (although I have a doctorate). "Bertha" is a "mature" student in chronological terms only. The scope of her dissertation research is ambiguous, and the quality of her proposal is substandard. The committee chair just told me that Bertha is hiring an editor to "assist" her in writing her dissertation. I'm outraged. I've complained to the chair and the director of doctoral studies, but if Bertha is allowed to continue having an "editor" to do her dissertation, shouldn't I report the university to an accreditation agency? This is too big a violation of integrity for me to walk away.

    Answer: Ms. Mentor shares your outrage -- but first, on behalf of Bertha, who has been betrayed by her advisers.

    In past generations, the model of a modern academician was a whiz-kid nerd, who zoomed through classes and degrees, never left school, and scored his Ph.D. at 28 or so. (Nietzsche was a full professor at 24.) Bertha is more typical today. She's had another life first.

    Most likely she's been a mom and perhaps a blue-collar worker -- so she knows about economics, time management, and child development. Maybe she's been a musician, a technician, or a mogul -- and now wants to mentor others, pass on what she's known. Ms. Mentor hears from many Berthas.

    Returning adult students are brave. "Phil" found that young students called him "the old dude" and snorted when he spoke in class. "Barbara" spent a semester feuding with three frat boys after she told them to "stop clowning around. I'm paying good money for this course." And "Millie's" sister couldn't understand her thirst for knowledge: "Isn't your husband rich enough so you can just stay home and enjoy yourself?"

    Some tasks, Ms. Mentor admits, are easier for the young -- pole-vaulting, for instance, and pregnancy. Writing a memoir is easier when one is old. And no one under 35, she has come to suspect, should give anyone advice about anything. But Bertha's problem is more about academic skills than age.

    Her dissertation plan may be too ambitious, and her writing may be rusty -- but it's her committee's job to help her. All dissertation writers have to learn to narrow and clarify their topics and pace themselves. That is part of the intellectual discipline. Dissertation writers learn that theirs needn't be the definitive word, just the completed one, for a Ph.D. is the equivalent of a union card -- an entree to the profession.

    But instead of teaching Bertha what she needs to know, her committee (except for Honest John) seems willing to let her hire a ghost writer.

    Ms. Mentor wonders why. Do they see themselves as judges and credential-granters, but not teachers? Ms. Mentor will concede that not everyone is a writing genius: Academic jargon and clunky sentences do give her twitching fits. But while not everyone has a flair, every academic must write correct, clear, serviceable prose for memos, syllabuses, e-mail messages, reports, grant proposals, articles, and books.

    Being an academic means learning to be an academic writer -- but Bertha's committee is unloading her onto a hired editor, at her own expense. Instead of birthing her own dissertation, she's getting a surrogate. Ms. Mentor feels the whole process is fraudulent and shameful.

    What to do?

    Ms. Mentor suggests that Honest John talk with Bertha about what a dissertation truly involves. (He may include Ms. Mentor's column on "Should You Aim to Be a Professor?") No one seems to have told Bertha that it is an individual's search for a small corner of truth and that it should teach her how to organize and write up her findings.

    Moreover, Bertha may not know the facts of the job market in her field. If she aims to be a professor but is a mediocre writer, her chances of being hired and tenured -- especially if there's age discrimination -- may be practically nil. There are better investments.

    But if Bertha insists on keeping her editor, and her committee and the director of doctoral studies all collude in allowing this academic fraud to take place, what should Honest John do?

    He should resign from the committee, Ms. Mentor believes: Why spend his energies with dishonest people? He will have exhausted "internal remedies" -- ways to complain within the university -- and it is a melancholy truth that most bureaucracies prefer coverups to confrontations. If there are no channels to go through, Honest John may as well create his own -- by contacting the accrediting agencies, professional organizations in the field, and anyone else who might be interested.

    Continued in the article.

    November 3, 2003 reply from David R. Fordham [fordhadr@JMU.EDU

    Bob, there are two very different questions being addressed here.

    The first deals with the revelation that “her dissertation research is ambiguous, and the quality of her proposal is substandard”.

    The editing of a manuscript is a completely different issue.

    The ambiguity of the research and the flaws with the proposal should be addressed far more forcefully than the editing issue!

    Care should be used to ensure that the editor simply edits (corrects grammar, tense, case, person, etc.), and isn’t responsible for the creation of ideas. But if the editor is a professional editor who understands the scope of his/her job, I don’t see why editing should be an issue for anyone, unless the purpose of the dissertation exercise is to evaluate the person’s mastery of the minutiae of the English language (in which case the editor is indeed inappropriate).

    Talk about picking your battles … I’d be a lot more upset about ambiguous research than whether someone corrected her sentence structure. I believe the whistle-blower needs to take a closer look at his/her priorities. A flag needs to be raised, but about the more important of the two issues.

    David R. Fordham
    PBGH Faculty Fellow
    James Madison University


    Assessment in Math and Science: What's the Point? --- http://www.learner.org/resources/series93.html



    Rubrics in Academia --- https://en.wikipedia.org/wiki/Rubric_(academic)

    "Assessing, Without Tests," by Paul Fain, Inside Higher Ed, February 17, 2016 ---
    https://www.insidehighered.com/news/2016/02/17/survey-finds-increased-use-learning-outcomes-measures-decline-standardized-tests?utm_source=Inside+Higher+Ed&utm_campaign=60a80c3a41-DNU20160217&utm_medium=email&utm_term=0_1fcbc04421-60a80c3a41-197565045

    Jensen Comment
    Testing becomes more effective for grading and licensing purposes as class sizes increase. It's less effective when hands-on experience is a larger part of competency evaluation. For example, in the final stages of competency evaluation in neurosurgery, testing becomes less important than expert evaluation of surgeries being performed in operating rooms. I want my brain surgeon to be much more than a good test taker. Testing is more cost-effective when assigning academic credit for a MOOC mathematics course taken by over 5,000 students.

    One thing to keep in mind is that testing serves a much larger purpose than grading the amount of learning. Testing is a huge motivator as evidenced by how students work so much harder to learn just prior to being tested.

    Some types of testing are also great integrators of multiple facets of a course. This is one justification of having comprehensive final examinations.

    Testing also can overcome racial, ethnic, and cultural biases. This is the justification, for example, for having licensing examinations like CPA examinations, bar examinations, nursing examinations, etc. be blind to race, ethnicity, and culture. This is also one of the justifications (good or bad) for taking grading out of the jurisdiction of teachers. Competency examinations also serve the purpose of giving credit for learning no matter how or where the subject matter is learned. Years ago people could take final examinations at the University of Chicago without ever having attended classes in a course ---
    http://faculty.trinity.edu/rjensen/assess.htm#ConceptKnowledge

    Bob Jensen's threads on assessment ---
    http://faculty.trinity.edu/rjensen/assess.htm

     

     

     



    How to Mislead With Statistics of Merit Scholars:  "Mom, Please Get Me Out of South Dakota!"
    Probabilities of Being a Merit Scholar Vary Intentionally With Geography:  The Odds are Higher in East St. Louis or Cactus Gulch, Nevada

    "Not-So-National Merit," by Ian Ayres, Freakonomics, April 4, 2014 ---
    http://freakonomics.com/2014/04/04/not-so-national-merit/

    Last December, thousands of high school sophomores and juniors learned the results of the 2013 Preliminary SAT (PSAT) test.  The juniors’ test scores will be used to determine whether they qualify as semifinalists for the prestigious National Merit Scholarship, which in turn makes them eligible for a host of automatic college scholarships. (Sophomores take the test just as practice.)

    The juniors will have to wait to find out for sure if they qualify until September, just before they begin submitting applications to colleges across the country.  But it is fairly straightforward to predict, based on their scores and last year’s cutoffs, whether they will qualify as semifinalists.

    Many students would be surprised to learn that qualification depends not only on how high they score, but also on where they go to school.   The National Merit Scholarship Corporation (NMSC) sets different qualifying cutoffs for each state to “ensure that academically talented young people from all parts of the United States are included in this talent pool.”  They have not disclosed any specific criteria for setting the state cutoffs.

    A high school student’s chances of receiving the award can depend crucially on his or her state of residence.  Last year, students in West Virginia needed only a 203 to qualify as a semifinalist (scores range from 60-240), while students from Texas needed a 219 and students from Washington, D.C. a 224.  Nationally, the West Virginia score was in the 97th percentile of scores, while the Washington, D.C. score was at the 99.5th percentile, based on a mean score of 143 and a standard deviation of 31.
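    The percentile figures above can be checked against a normal approximation of the PSAT score distribution, using the mean (143) and standard deviation (31) quoted in the article. A minimal sketch (the normal approximation is an assumption; the actual score distribution is discrete and somewhat skewed):

```python
import math

def psat_percentile(score, mean=143.0, sd=31.0):
    """Percentile of a PSAT score under a normal approximation,
    using the national mean and standard deviation quoted above."""
    z = (score - mean) / sd
    # Standard normal CDF via the error function
    return 100.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# The 2013 state cutoffs cited in the article:
wv = psat_percentile(203)   # West Virginia -- roughly the 97th percentile
tx = psat_percentile(219)   # Texas -- roughly the 99.3rd percentile
dc = psat_percentile(224)   # Washington, D.C. -- roughly the 99.5th percentile
```

The 21-point spread between the West Virginia and D.C. cutoffs translates into a large gap in rarity: under this approximation, only about half a percent of test takers clear the D.C. bar.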

    I’ve crudely estimated that because of this state cutoff discrimination, approximately 15% of students (about 2,400 students a year) who are awarded semifinalist status have lower scores than other students who were not semifinalists merely due to their geographic location.  Troublesomely, I also found that states with larger minority populations tend to have higher cutoffs.

    Instead of just complaining, I have partnered with an extraordinary high-school sophomore from New Jersey named India Unger-Harquail to try to do something about it.

    We’ve just launched a new website, AcadiumScholar.org. You can go to the site, enter a score, and it will quickly tell you the states where your score would have qualified you as an NMSC semifinalist.

    But wait, there’s more.  The site also offers to certify qualified students based on a national standard of merit.  If you represent and warrant to us that you received a PSAT score meeting the minimum cutoff in at least one state (and you give us the opportunity to try to verify the accuracy of your score with NMSC), we’ll give you the right to describe yourself as an “Acadium Scholar.”  We’ve separately applied to the USPTO to register that phrase as a certification mark (in parallel fashion to my earlier “fair employment mark”).

    Instead of the yes-or-no signal offered by the NMSC, we’ll also certify students based on the number of states in which they would have qualified as semifinalists.  For example, a student who scored a 211 could be certified to describe herself as a “19-state Acadium Scholar.”

    Our certification allows:

    ·         A student from a strong cutoff-state, like Texas, who scores a 218 (just missing the Lone Star qualifying cutoff of 219) to say nonetheless that he’s a 41-state Acadium Scholar.

    ·         A student from a weak cutoff state, like North Dakota, who scores an extraordinary 235 on the exam to say that she is a 50-state Acadium Scholar.

    We’re even letting sophomores use their scores to certify so that all the pressure isn’t on junior year.  There are also some sophomores who may have scored ten points better in their sophomore than their junior year.  Now those students can certify as Acadium Scholars based on their higher scores.
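    The “N-state Acadium Scholar” certification described above amounts to a simple threshold count over the state cutoff table. A minimal sketch, using only the three 2013 cutoffs quoted in this article (a real implementation would need the full table of state cutoffs, which is not reproduced here):

```python
# Semifinalist cutoffs quoted in the article (2013). Illustrative
# subset only -- the real certification uses all state cutoffs.
CUTOFFS = {"West Virginia": 203, "Texas": 219, "Washington, D.C.": 224}

def qualifying_states(score, cutoffs=CUTOFFS):
    """Return the states whose semifinalist cutoff this score meets."""
    return sorted(state for state, cutoff in cutoffs.items()
                  if score >= cutoff)
```

With the full table, the Texas student in the first bullet who scores 218 would count every state with a cutoff of 218 or below, even though 218 misses the Texas cutoff of 219.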

    Continued in article

    Jensen Comment
    Many elite colleges in search of diversity in geography as well as race and religion admit to varying admission standards for geography. It's harder to get into Harvard from Massachusetts than it is from Wyoming or Alaska.

    Bob Jensen's threads on assessments ---
    http://faculty.trinity.edu/rjensen/Assess.htm

     

     



    Culture matters enormously. Do better analytics lead managers to "improve" or "remove" the measurably underperforming? Are analytics internally marketed and perceived as diagnostics for helping people and processes perform "better"? Or do they identify the productivity pathogens that must quickly and cost-effectively be organizationally excised? What I've observed is that many organizations have invested more thought into acquiring analytic capabilities than confronting the accountability crises they may create.
    "The Real Reason Organizations Resist Analytics," by Michael Schrage, Harvard Business Review Blog, January 29, 2013 --- Click Here
    http://blogs.hbr.org/schrage/2013/01/the-real-reason-organizations.html?referral=00563&cm_mmc=email-_-newsletter-_-daily_alert-_-alert_date&utm_source=newsletter_daily_alert&utm_medium=email&utm_campaign=alert_date

    While discussing a Harvard colleague's world-class work on how big data and analytics transform public sector effectiveness, I couldn't help but ask: How many public school systems had reached out to him for advice?

    His answer surprised. "I can't think of any," he said. "I guess some organizations are more interested in accountability than others."

    Exactly. Enterprise politics and culture suggest analytics' impact is less about measuring existing performance than creating new accountability. Managements may want to dramatically improve productivity but they're decidedly mixed about comparably increasing their accountability. Accountability is often the unhappy byproduct rather than desirable outcome of innovative analytics. Greater accountability makes people nervous.

    That's not unreasonable. Look at the vicious politics and debate in New York and other cities over analytics' role in assessing public school teacher performance. The teachers' union argues the metrics are an unfair and pseudo-scientific tool to justify firings. Analytics' champions insist that the transparency and insight these metrics provide are essential for determining classroom quality and outcomes. The arguments over numbers are really fights over accountability and its consequences.

    At one global technology services firm, salespeople grew furious with a CRM system whose new analytics effectively held them accountable for pricing and promotion practices they thought undermined their key account relationships. The sophisticated and near-real-time analytics created the worst of both worlds for them: greater accountability with less flexibility and influence.

    The evolving marriage of big data to analytics increasingly leads to a phenomenon I'd describe as "accountability creep" — the technocratic counterpart to military "mission creep." The more data organizations gather from more sources and algorithmically analyze, the more individuals, managers and executives become accountable for any unpleasant surprises and/or inefficiencies that emerge.

    For example, an Asia-based supply chain manager can discover that the remarkably inexpensive subassembly he's successfully procured typically leads to the most complex, time-consuming and expensive in-field repairs. Of course, engineering design and test should be held accountable, but more sophisticated data-driven analytics makes the cost-driven, compliance-oriented supply chain employee culpable, as well.

    This helps explain why, when working with organizations implementing big data initiatives and/or analytics, I've observed the most serious obstacles tend to have less to do with real quantitative or technical competence than perceived professional vulnerability. The more managements learn about what analytics might mean, the more they fear that the business benefits may be overshadowed by the risk of weakness, dysfunction and incompetence exposed.

    Culture matters enormously. Do better analytics lead managers to "improve" or "remove" the measurably underperforming? Are analytics internally marketed and perceived as diagnostics for helping people and processes perform "better"? Or do they identify the productivity pathogens that must quickly and cost-effectively be organizationally excised? What I've observed is that many organizations have invested more thought into acquiring analytic capabilities than confronting the accountability crises they may create.

    For at least a few organizations, that's led to "accountability for thee but not for me" investment. Executives use analytics to impose greater accountability upon their subordinates. Analytics become a medium and mechanism for centralizing and consolidating power. Accountability flows up from the bottom; authority flows down from the top.

    Continued in article

    Bob Jensen's threads on assessment ---
    http://faculty.trinity.edu/rjensen/Assess.htm

    Jensen Comment
    Another huge problem in big data analytics is that the databases cannot possibly answer some of the most interesting questions. For example, often they reveal only correlations without any data regarding causality.

    A Recent Essay
    "How Non-Scientific Granulation Can Improve Scientific Accountics"
    http://www.cs.trinity.edu/~rjensen/temp/AccounticsGranulationCurrentDraft.pdf
    By Bob Jensen
    This essay takes off from the following quotation:

    A recent accountics science study suggests that audit firm scandal with respect to someone else's audit may be a reason for changing auditors.
    "Audit Quality and Auditor Reputation: Evidence from Japan,"
    by Douglas J. Skinner and Suraj Srinivasan, The Accounting Review, September 2012, Vol. 87, No. 5, pp. 1737-1765.

    Our conclusions are subject to two caveats. First, we find that clients switched away from ChuoAoyama in large numbers in Spring 2006, just after Japanese regulators announced the two-month suspension and PwC formed Aarata. While we interpret these events as being a clear and undeniable signal of audit-quality problems at ChuoAoyama, we cannot know for sure what drove these switches (emphasis added). It is possible that the suspension caused firms to switch auditors for reasons unrelated to audit quality. Second, our analysis presumes that audit quality is important to Japanese companies. While we believe this to be the case, especially over the past two decades as Japanese capital markets have evolved to be more like their Western counterparts, it is possible that audit quality is, in general, less important in Japan (emphasis added).

     

    Bob Jensen's threads on controversies in education ---
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm

     



    The New GMAT

    The New GMAT:  Part 1
    "The New GMAT: Questions for a Data-Rich World," by Alison Damast, Business Week, May 14, 2012 ---
    http://www.businessweek.com/articles/2012-05-14/the-new-gmat-questions-for-a-data-rich-world

    Editor’s Note: This is the first in a three-part series on the new GMAT, which makes its official debut on June 5. In this article, we examine the conceptual building blocks for the test’s new Integrated Reasoning section.

    On a blustery day in February 2009, a group of nine deans and faculty members from U.S. and European business schools huddled together in a conference room in McLean, Va., at the Graduate Management Admission Council’s headquarters. They were there to discuss what would be some of the most radical changes to the Graduate Management Admission Test (GMAT) in the exam’s nearly 60-year history.

    Luis Palencia, then an associate dean at Spain’s IESE Business School, was eager to press his case for the skills he thought today’s MBAs needed to have at their fingertips. Business students must be able to nimbly interpret and play with data in graphs, spreadsheets, and charts, using the information to draw swift but informed conclusions, he told his colleagues.

    “The GMAT was not becoming obsolete, but it was failing to identify the skills which might be important to warrant the success of our future candidates,” he said in a phone interview from Barcelona three years later.

    By the time the faculty advisory group concluded two days later, they had come up with a set of recommendations that would serve as a framework for what would eventually become the new “Integrated Reasoning” section of the Next Generation GMAT, which has been in beta testing for two years and will be administered to applicants for the first time on June 5.

    Until now, the B-school entrance exam, which was administered 258,192 times worldwide in 2011, was made up of verbal, quantitative, and two writing sections. The new section, which replaces one of the writing sections, is the biggest change to the GMAT since the shift to computer-adaptive testing 15 years ago, and one that has been in the works since 2006, when GMAC first decided to revisit the exam and the skills it was testing, says Dave Wilson, president and chief executive officer of GMAC.

    “At that time, we got a pretty good handle that the GMAT was working, but we wanted to know if there was anything that we weren’t measuring that would provide real value to the schools,” Wilson says.

    It turned out there was a whole slew of new skills business school faculty believed could be added to the exam. The recommendations put forth by Palencia and the rest of the committee that convened in 2009 served as the conceptual building blocks for what a new section might look like. Later that year, GMAC surveyed nearly 740 faculty members around the world, from business professors to admissions officers, who agreed with many of the committee’s findings and suggested that students needed certain proficiencies to succeed in today’s technologically advanced, data-driven workplaces.

    For example, they gave “high importance” ratings to skills such as synthesizing data, evaluating data from different sources, and organizing and manipulating it to solve multiple, interrelated problems, according to the Next Generation GMAC Skills Survey report.

    Those are all examples of skills that can now be found on the 30-minute Integrated Reasoning section, which GMAC has spent $12 million developing over the past few years, Wilson says. It will have 12 questions and include pie charts, graphs, diagrams, and data tables. The section employs four different types of questions that will allow students to flex their analytical muscles.

    Continued in article

    "The New GMAT: Thanks, But No Thanks," Business Week, May 31, 2012 ---
    http://www.businessweek.com/articles/2012-05-31/the-new-gmat-thanks-but-no-thanks

    The future can be scary, especially if you’re headed to B-school. And if you haven’t taken the GMAT yet, the future can be downright terrifying. On June 2 the old GMAT will be consigned to the dustbin of history and replaced on June 5 (after a two-day blackout period) with a new version of the B-school entrance test. The new and improved exam replaces one of the existing writing sections with a new integrated reasoning section that apparently is giving test takers the night sweats.

    There’s been a mad rush on the part of students to register for the test before June 5. The Graduate Management Admission Council, which publishes the exam, isn’t saying exactly how mad, but if you charted test registrations it would look a lot like a bell curve. “We expected volumes to go up in April and May, and they have,” wrote GMAC spokesman Bob Ludwig in an e-mail. “Quite significantly.”

    What that means for test takers is that, according to test-prep companies, registering for the GMAT just got a lot more difficult, especially if you’ve waited until the last minute. To take the test before the big changeover, some students are driving an hour or two out of their way to less popular testing centers and taking the test mid-week rather than on the weekend.

    Andrew Mitchell, director of pre-business programs at Kaplan Test Prep, says a surge in test registrations before substantive changes is not unusual. In a recent survey, 38 percent of Kaplan GMAT students said they were trying to beat the June 2 deadline and take the old test. Many of them hadn’t even seen the new integrated reasoning questions yet—they were worried about the new section, sight unseen.

    Test takers have now had several months to eyeball the new section using sample questions supplied by GMAC and test-prep materials. Mitchell says students equate the new integrated reasoning section’s level of difficulty with that of the GMAT’s data sufficiency questions—some of the test’s toughest—which ask test takers to determine if the information supplied is enough to answer the question.

    “A business school student is generally going to want to take the easier path if there’s no disadvantage to doing so,” Mitchell says. “Integrated reasoning is all about working with data. Quant data is displayed graphically, and that’s intimidating to a lot of people. It makes sense that people would be apprehensive.”

    But it’s not like prospective MBAs were without options. It’s worth noting that the usual prescription for apprehension when it comes to the GMAT—hitting the books—was and is available for anyone contemplating the new test. Kaplan test-prep books that went on sale in January have material related to integrated reasoning, and integrated reasoning sections have been added to five of Kaplan’s nine full-length practice tests.

    At Veritas Prep, the number of website visitors using “integrated reasoning” as a search term has doubled every month since January. “We’re definitely seeing a lot of traffic,” says Brian Galvin, director of academic programs at Veritas. “It’s an exponential increase in interest.”

    Continued in article

     


    Head Start Programs

    It is now 45 years later. We spend more than $7 billion providing Head Start to nearly 1 million children each year. And finally there is indisputable evidence about the program's effectiveness, provided by the Department of Health and Human Services: Head Start simply does not work.
    "Time to Ax Public Programs That Don't Yield Results," Liberal Columnist Joe Klein, Time Magazine, July 26, 2011, Page 27 ---
    http://www.time.com/time/nation/article/0,8599,2081778,00.html 

    Barack Obama has been accused of "class warfare" because he favors closing several tax loopholes — socialism for the wealthy — as part of the deficit-cutting process. This is a curious charge: class warfare seems to be a one-way street in American politics. Over the past 30 years, the superwealthy have waged far more effective warfare against the poor and the middle class, via their tools in Congress, than the other way around. How else can one explain the fact that the oil companies, despite elephantine profits, are still subsidized by the federal government? How else can one explain the fact that hedge-fund managers pay lower tax rates than their file clerks? Or that farm subsidies originally meant for family farmers go to huge corporations that hardly need the help?

    Actually, there is an additional explanation. Conservatives, like liberals, routinely take advantage of a structural flaw in the modern welfare state: there is no creative destruction when it comes to government programs. Both "liberal" and "conservative" subsidies linger in perpetuity, sometimes metastasizing into embarrassing giveaways. Even the best-intentioned programs are allowed to languish in waste and incompetence. Take, for example, the famed early-education program called Head Start.
    (See more about the Head Start reform process.)

    The idea is, as Newt Gingrich might say, simple liberal social engineering. You take the million or so poorest 3- and 4-year-old children and give them a leg up on socialization and education by providing preschool for them; if it works, it saves money in the long run by producing fewer criminals and welfare recipients — and more productive citizens. Indeed, Head Start did work well in several pilot programs carefully run by professionals in the 1960s. And so it was "taken to scale," as the wonks say, as part of Lyndon Johnson's War on Poverty.

    It is now 45 years later. We spend more than $7 billion providing Head Start to nearly 1 million children each year. And finally there is indisputable evidence about the program's effectiveness, provided by the Department of Health and Human Services: Head Start simply does not work.

    According to the Head Start Impact Study, which was quite comprehensive, the positive effects of the program were minimal and vanished by the end of first grade. Head Start graduates performed about the same as students of similar income and social status who were not part of the program. These results were so shocking that the HHS team sat on them for several years, according to Russ Whitehurst of the Brookings Institution, who said, "I guess they were trying to rerun the data to see if they could come up with anything positive. They couldn't."
    (See how California's budget woes will hurt the state's social services.)

    The Head Start situation is a classic among government-run social programs. Why do so many succeed as pilots and fail when taken to scale? In this case, the answer is not particularly difficult to unravel. It begins with a question: Why is Head Start an HHS program and not run by the Department of Education? The answer: Because it is a last vestige of Johnson's War on Poverty, which was run out of the old Department of Health, Education and Welfare. The War on Poverty attempted to rebuild poor communities from the bottom up, using local agencies called community action programs. These outfits soon proved slovenly; often they were little more than patronage troughs for local Democratic Party honchos — and, remarkably, to this day, they remain the primary dispensers of Head Start funds. As such, they are far more adept at dispensing make-work jobs than mastering the subtle nuances of early education. "The argument that Head Start opponents make is that it is a jobs program," a senior Obama Administration official told me, "and sadly, there is something to that."

    Continued in article


    Assessment in Math and Science: What's the Point? --- http://www.learner.org/resources/series93.html


    Assessment by Ranking May Be a Bad Idea 

    An interesting article on forced performance rankings (might be read as grading) ---
    Olympics 1, AIG 0: Why Forced Ranking Is a Bad Idea --- Click Here
    http://blogs.hbr.org/bregman/2010/02/olympics-1-aig-0-why-forced-ra.html?cm_mmc=npv-_-DAILY_ALERT-_-AWEBER-_-DATE

    Jensen Comment
    I think some readers fail to see the importance of just what the title means when it reads “Olympics 1, AIG 0.”

    They're apt to look for some relationship between the Olympics and AIG. There may well be some very obscure relationship, but that’s not the point.

     

    February 19, 2010 reply from David Albrecht [albrecht@PROFALBRECHT.COM]

    Bob,

    This is one of the most interesting stories you've passed along in quite a while. I especially like the part of the article that says once a ranking criterion is selected, all other tasks an employee might perform (such as learning/training) are counterproductive. I think this is a situation very present in academe. GPA becomes an important metric for students' quest for either employment or graduate school after graduation with a BSBA. If GPA is the primary criterion for awarding entry and scholarships, then any activity a student takes that could result in a lower grade is to be avoided at all costs.

    Moreover, learning within a course is a multivariate activity. I can think of memorization, application, affectation and personal growth. If a professor is untrained in education (and most biz profs are), professor selection of inappropriate grading criteria can place a huge cost on students.

    David Albrecht

    February 19, 2010 reply from James R. Martin/University of South Florida [jmartin@MAAW.INFO] (I combined two replies)

    According to Deming:
    Annual reviews and ranking employees indicates the absence of a knowledge of variation and an absence of an understanding of the system. A manager who understands variation would not rank people because he or she would understand that ranking people merely ranks the effect of the system on the people. This causes tampering & destroys motivation and teamwork.

    See http://maaw.info/DemingMain.htm for Deming's theory of management.

     This hit one of my buttons. The point: There is nothing wrong with ranking people in games. Someone wins and someone loses. But life, business, and education are not games. Everyone can win if they cooperate and work together. Ranking people prevents them from doing that and causes winners and losers in the short run. In the long run, everyone loses.

    February 20, 2010 reply from Francine McKenna [retheauditors@GMAIL.COM]

    Bob/Dave  
    Agree wholeheartedly.  I've written a lot about forced ranking for partners on down and the negative effect it's had on professionalism and morale in the Big 4.  They've followed their big ideal client GE into the abyss.

    http://retheauditors.com/2009/11/05/live-our-values-demonstrate-our-behaviors-support-our-strategy/

    http://retheauditors.com/2009/08/12/goingconcern-ratings-raises-and-promotions-forced-ranking-in-the-big-4/

    http://retheauditors.com/2007/06/26/when-is-a-layoff-not-a-layoff/

    Francine

    February 19, 2010 reply from Bob Jensen

    And I forgot to cringe properly when remembering all the times I thought I was making the job easier when I had students rank each other’s term papers --- because I thought ordinal-scale ranking would be easier for them than assigning a letter grade or ratio-scaled score. Ratio scales differ from interval scales by having a common zero point, which is what makes correlations different from covariances.

    In small graduate classes I thought it would be a learning exercise for students to both read each other’s papers and rank them. Students were asked not to rank their own papers in the set of submitted rankings.

    However, for grading purposes I graded the papers before I read the student rankings. I reserved the right to only mark a paper’s grade upward after reading the student commentaries that accompanied their rankings. I suspect I would’ve graded downward as well if plagiarism was detected by student rankers, but not once in my career did a student ranker ever disclose a case of plagiarism.

    Still, I’m now wondering about the propriety of making students rank papers.

    Bob Jensen


    “If a student doesn’t come to school,” he continued, “how can you justify passing that kid?”
     Fernanda Santos

    "Bronx School’s Top Ranking Stirs Wider Doubts About Rating System," by Fernanda Santos, The New York Times, January 20, 2011 ---
    http://www.nytimes.com/2011/01/21/education/21grades.html?_r=1&hpw

    One of the trademarks of New York City’s school accountability system is an equation that assigns every school a letter grade, A through F, based on a numerical score from 1 to 100.

    Lynn Passarella, facing camera, the principal of the Theater Arts Production Company School, outside the school on Thursday. She declined to comment on the allegations about her school’s grading practices.

    A parent pulling up the latest report card for the Theater Arts Production Company School in the Bronx would find that it earned the score of 106.3 (including extra credit).

    But that very empiric-sounding number, which was the highest of any high school in the city, is based in part on subjective measures like “academic expectations” and “engagement,” as measured by voluntary parent, teacher and student surveys.

    And, according to some teachers at the school, even the more tangible factors in the score — graduation rates and credits earned by students — were not to be taken at face value. The school has a policy that no student who showed up for class should fail, and even some who missed many days of school were still allowed to pass and graduate.

    The Department of Education, which revealed on Wednesday that it was investigating grading practices at the school, says that it has a team devoted to analyzing school statistics every year and looking for red flags like abnormal increases in student scores or dropout rates. But a department official said that nothing in its data had raised suspicions about the school, known as Tapco, until a whistle-blower filed a complaint in October.

    Still, in a data-driven system where letter grades can determine a school’s fate, one big question looms over the investigation: If the allegations turn out to be true, are they an exception or a sign of a major fault in the school accountability system?

    “The D.O.E. has absolutely created a climate for these types of scandals to happen,” Michael Mulgrew, the president of the teachers’ union, said in an interview. “Their culture of ‘measure everything and question nothing a principal tells you’ makes it hard to figure out what’s real and what’s not real inside a school.”

    There are many gradations of impropriety, and it is unclear if any of them apply to Tapco, which has about 500 students and also includes a middle school. The school’s teacher handbook states that no student should fail a class if he or she regularly attends, and that students who miss work should be given “multiple opportunities for student success and work revision.”

    Current and former teachers at the school said that even students who were regularly absent were given passing grades, in some cases with course credits granted by the principal without a teacher’s knowledge. Some students’ records showed credits for courses the school did not offer.

    The investigation into the irregularities at Tapco, which began in October, also includes allegations that the school’s principal, Lynn Passarella, manipulated teacher and parent surveys, which represent 10 of the 100 points in a school’s score. Graduation rates, passing rates on Regents exams and earned credits constitute most of the score.
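    The article gives only two hard numbers about how a score is built: surveys count for 10 of the 100 points, and extra credit can push a total past 100 (Tapco’s 106.3). The component split and the letter-grade cutoffs in this sketch are therefore hypothetical, chosen only to show the shape of the calculation:

```python
def school_score(performance_pts, progress_pts, survey_pts, extra_credit=0.0):
    """Combine component points into a composite score; extra credit can
    push the total past 100, as with Tapco's reported 106.3."""
    assert 0 <= survey_pts <= 10, "surveys count for 10 of the 100 points"
    return performance_pts + progress_pts + survey_pts + extra_credit

def letter_grade(score):
    """Map a composite score to A-F; these cutoffs are hypothetical."""
    for cutoff, grade in [(70, "A"), (55, "B"), (40, "C"), (25, "D")]:
        if score >= cutoff:
            return grade
    return "F"

# One hypothetical way Tapco's reported 106.3 could decompose:
tapco = school_score(performance_pts=65.0, progress_pts=25.0,
                     survey_pts=10.0, extra_credit=6.3)
print(tapco, letter_grade(tapco))  # 106.3 A
```

    The sketch makes the article’s point concrete: a tenth of the score rests on voluntary surveys, so a principal who could influence how surveys are collected would control a meaningful slice of the total.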

    Ms. Passarella declined to comment on the allegations.

    A spokesman for the Education Department, Matthew Mittenthal, said: “We take every allegation of misconduct seriously, and hope that the public can reserve judgment until the investigation is complete.”

    Sometimes, the analysts who pore over the data uncover serious problems. Last year, the Education Department lowered the overall scores of three high schools. At Jamaica High School in Queens, the department discovered that the school had improperly granted credit to some transfer students. At John F. Kennedy High School in the Bronx and W. H. Maxwell Career and Technical Education High School in Brooklyn, administrators could not provide documentation to explain why some students had left the schools.

    Since 2008, at least four principals and assistant principals have been reprimanded — two retired, one served a 30-day unpaid suspension and another paid a $6,500 fine — on charges that included tampering with tests.

    Principals can get as much as $25,000 in bonuses if their schools meet or exceed performance targets, and some experts are skeptical that the department’s system of checks and balances is as trustworthy as it should be, particularly when money is at stake.

    Tapco’s administrators got a bonus once, for the 2008-9 school year, when the high school’s overall score was 85.8, which earned it an A. (The middle school scored 73.) Ms. Passarella received $7,000, while her assistant principals got $3,500 each, according to the Education Department. (Administrator bonuses for 2009-10 performance have not been doled out.)

    “There’s an inherent temptation towards corruption when you create a situation where there are rewards for things like higher test scores or favorable surveys,” said Sol Stern, an education researcher at the Manhattan Institute, a conservative research group. “It’s an invitation to cheating.”

    One mother, Cathy Joyner, whose daughter, Sapphire Connor, is a junior, said the school was excellent, adding that “the children are respectful” and that the school was “concentrating on their talents.”

    But one teacher, who spoke on condition of anonymity because he said he feared for his job, gave a different account. For teachers who do not do what the principal wants, the teacher said, “it’s difficult to get tenure.”

    “If a student doesn’t come to school,” he continued, “how can you justify passing that kid?”

    Wow:  97% of Elementary NYC Public Students Get A or B Grades --- There must be higher IQ in the water!
    "City Schools May Get Fewer A’s," by Jennifer Medina, The New York Times, January 28, 2010 ---
    http://www.nytimes.com/2010/01/30/education/30grades.html?hpw

    Michael Mulgrew, the president of the United Federation of Teachers, criticized the decision to reduce the number of schools that receive top grades.

    Continued in article

    Bob Jensen's threads on assessment are at
    http://faculty.trinity.edu/rjensen/assess.htm


    "Colleges (i.e., prestigious colleges) With Lenient Grades Get Six Times as Many Grads Into B-School (and jobs)," by Louis Lavelle, Bloomberg Businessweek, July 30, 2013 ---
    http://www.businessweek.com/articles/2013-07-30/colleges-with-lenient-grades-get-six-times-as-many-grads-into-b-school

    Link to the Study  --- http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0069258

    Abstract
    When explaining others' behaviors, achievements, and failures, it is common for people to attribute too much influence to disposition and too little influence to structural and situational factors. We examine whether this tendency leads even experienced professionals to make systematic mistakes in their selection decisions, favoring alumni from academic institutions with high grade distributions and employees from forgiving business environments. We find that candidates benefiting from favorable situations are more likely to be admitted and promoted than their equivalently skilled peers. The results suggest that decision-makers take high nominal performance as evidence of high ability and do not discount it by the ease with which it was achieved. These results clarify our understanding of the correspondence bias using evidence from both archival studies and experiments with experienced professionals. We discuss implications for both admissions and personnel selection practices.

    . . .

    General Discussion

    Many studies in the social psychology and organizational behavior literatures have found that people tend to attribute too much influence to disposition and too little influence to situational factors impinging on the actor when explaining others' behaviors, achievements, and failures. This common tendency, labeled the correspondence bias or the fundamental attribution error, has been shown to be robust across a variety of contexts and situations. Yet, to date, most of the evidence about this bias comes from laboratory experiments with college students as participants, and its implications for field settings and organizational outcomes are seldom examined. Using data from both the experimental laboratory and the field, we extend prior research by investigating whether this tendency leads experienced professionals to make systematic mistakes in their selection decisions, favoring alumni from academic institutions with higher grade distributions and employees working in favorable business climates. Our results indicate that candidates who have demonstrated high performance thanks to favorable situations are more likely to be rated highly and selected. Across all our studies, the results suggest that experts take high performance as evidence of high ability and do not sufficiently discount it by the ease with which that performance was achieved. High grades are easier to achieve in an environment where the average is high and so are less indicative of high performance than are the same grades that were earned from an institution with lower grades on average. Sky-high on-time percentages should be less impressive at an airport that was running well before the manager got there. Although we focused on two selection scenarios, we believe the results speak to other selection and evaluation problems.

    Indeed, we see consistent evidence of situation neglect in contexts where political and business leaders are credited with performance that derives directly from stochastic economic factors. Voters face a Lewinian dilemma when they evaluate the performance of incumbent politicians running for re-election. They should reward politicians who create positive change for their constituencies while considering what portion of those changes were due to lucky or exogenous factors. Wolfers [41] finds that voters, like our admissions professionals and executives, favor politicians that had the good luck to work under favorable conditions. Voters are more likely to reelect incumbents after terms marked by positive national economic trends or (in the case of oil-rich states) high oil prices. CEOs also benefit from fortuitous economic conditions for which they are not responsible. Bertrand and Mullainathan [42] present evidence that CEO compensation is driven to equal degrees by their management and the uncontrollable economic conditions in which they managed. Stakeholders in these cases have strong incentives to reward leaders who add value above the vagaries of the economy, but they seem blind to the difference.

    It is often the case that structural and situational factors are the most powerful influences on behavior. Within organizations, for example, it is easier to succeed in some jobs than in others [43]. Sometimes people will achieve positive outcomes simply because of a beneficent environment. It is easier to achieve success as a manager when your team is strong than when your team is weak. Likewise, it is easier to obtain a strong education in an excellent private school than in an under-funded public school. And it is easier to achieve high grades at schools where higher grades are the norm. So it would be a mistake to neglect situational effects on performance, but that is what our data suggest that even experts and professionals tend to do.

    Are we always doomed to make erroneous correspondent inferences? Evidence suggests not; the bias is subject to a number of moderating factors. These are useful to consider both because they provide clues about the psychological mechanisms at work and because they suggest potential debiasing treatments. For instance, when people are stressed, distracted, or busy, they are more likely to fall victim to the correspondence bias [44]. Those with greater capacity for reflective thought, as measured by need for cognition, are less likely to show the bias [45]. When people feel accountable to others, they are less likely to show the bias [46]. When people are in good moods, they appear more likely to show the bias [47]. And some collectivistic cultures may be less vulnerable to the correspondence bias than individualistic ones [48], [49].

    Organizations often adopt practices because they are legitimate, popular, or easy to justify [50], [51]. That may help explain why we observed such consistency in admissions policies in neglecting to consider differences in grade distributions between institutions. This sort of consistency in organizational “best” practices can create incentives for individuals to play along, despite their imperfections. Indeed, it is even conceivable that cultural or linguistic norms can make it easier for individuals to follow decision norms that are more easily understood by or explained to others. On the other hand, it is reasonable to assume that finding a better system to evaluate applicants would improve admissions decisions, allowing the schools that do it to identify strong candidates that other schools neglect. The Oakland Athletics baseball team did just this when it pioneered a new statistical approach to identifying promising baseball players to recruit [52]. Their success has since been emulated by other teams, changing the way baseball's talent scouts pick players. However, the problem for admissions departments may be more complicated because explicitly tarring some institutions as lenient-grading is likely to elicit energetic protests if they ever find out about it [53].

    It is common in organizations for the abilities of an individual, a department, or a division to be shrouded in complicating or confounding influences that make them difficult to detect or measure [54]. Indeed, as much as ratings systems like grades and performance metrics like on-time percentages can help clarify standards for evaluation, they can also be used to obscure performance [55]. Variation in grading standards between institutions obscures the value of using grades to measure student performance. It is probably in the interest of lenient-grading institutions to hide the degree of their leniency. Consistent with this motive, recent years have seen changes in the disclosure that institutions are willing to make [56]. Fewer academic institutions are willing to disclose average grading data or class rankings for their students or alumni. When we contacted institutions to inquire about average grades, elite, expensive, private institutions – those with the highest average grades – were most likely to decline to disclose the information.

    Organizational Image, Legitimacy, and Stakeholder Appraisals

    The strategic use of scoring and assessment metrics has implications at the organization level because of the way that institutions compete. Scott and Lane [57] advanced a theory of organizational image in which stakeholders (both members as well as outside audiences) play a key role in shaping the organization's image by making legitimacy appraisals that can counterbalance the organization's attempts at image management. This model is built on the dual premises that organizations and their members derive personal and economic benefits from promoting a positive image [58], [59], but that salient audiences have a role in validating that image [60], [61]. These forces form an equilibrium that balances the organization's incentives for an unbounded positive spin with the utility gained by stakeholders from an image grounded in reality. Scott and Lane [57] term the specific mechanism by which this equilibrium is reached reflected stakeholder appraisals. In the present paper we have investigated a setting in which stakeholders may have difficulty judging the appropriateness of image-relevant information which could then threaten the stability of the reflected stakeholder appraisal equilibrium.

    In the context of higher education, graduating students are among the primary interfaces through which employers, graduate schools, and communities interact with undergraduate institutions. Their reputation in the form of grades contributes to the reputation [62] of the organization. As such, undergraduate institutions have an incentive to promote an image of intelligence and achievement to these outside audiences by maintaining a relatively high grade distribution. Given the tremendous value of being able to place alumni in better graduate schools and in better jobs, universities cannot be expected to go too far in seeking to curtail grade inflation. For example, universities are unlikely to implement meaningful institutional changes such as replacing grades with percentile rankings. Instead, we should expect academic institutions to pay lip service to the importance of high academic standards while at the same time avoiding publicizing average grade distributions and avoiding reporting class rank data on their students.

    Do we see unchecked escalation of grade distributions by a market full of organizations unconstrained by the critical feedback from stakeholders? Of course, there are multiple mechanisms supporting a moderate equilibrium even without functioning stakeholder criticism of the type we have described, but some data suggest grade inflation is a prolonged and significant trend in U.S. education [6]. More troubling are anecdotal reports of institutions manipulating their grade distribution with the publicly expressed intent of influencing the selection decisions of hiring firms [63]. Clearly, these institutions are anticipating that employers will not sufficiently discount the grades of their alumni to eliminate the advantage their inflated grades will confer.

    Limitations and Directions for Future Research

    Our studies are subject to several important limitations. First, the sample used in our first study was relatively small due to the size of the admissions department that participated, even though the results were highly significant. In addition, the first and second studies employed hypothetical decisions, which may have limited validity as a model of fully consequential and incentivized decision making. Future research could benefit from a more qualitative research approach to investigate how admissions and promotion decisions are made by various organizations. As for Study 3, there are many variables (such as variations in average GPA by discipline within a school) for which we lacked information and thus could not control in our analyses. These variables may have important influences on admission decisions that are not captured in the present research. Although these are important limitations, it is also worth noting that the limitations differ across studies and yet the findings are robust.

    The conclusions implied by our results as well as the limitations of our research bring forth some fruitful and interesting possible avenues for future research. One interesting question is whether other academic selection contexts would show the same patterns as business school admissions decisions. Law schools, for instance, use the Law School Admissions Council, an organization that (among other things) processes applications for law schools and provides a service that gives schools a sense of where a given applicant's GPA falls relative to other applicants that the LSAC has seen from that same institution. The Graduate Management Admissions Council does not process business school applications and so does not provide an equivalent service for business schools. Does the LSAC's assistance help law schools make better admissions decisions?

    Similarly, future research could explore the implications of the correspondence bias for promotions of business professionals. Just as educational institutions vary with respect to the ease of achieving high grades, so do companies, industries, and time periods differ with respect to the ease of achieving profitability. There are some industries (such as airlines) that are perennially plagued by losses and whose firms have trouble maintaining profitability. There are other industries (such as pharmaceuticals) that have seen more stable profitability over time. And clearly there are changes over time in industry conditions that drive profitability; for example, global oil prices drive profitability among oil companies.

    We believe an important avenue for further investigation lies in continuing the study of the correspondence bias in empirical settings with organizationally-relevant outcomes. A more thorough understanding of the implications of this common bias for organizations could be achieved by further investigating business decisions such as promotions. There are also a multitude of other business decisions in which a latent variable of interest is seen in the context of varying situational pressures. Investment returns, sports achievements, and political success are all domains in which judgments are vulnerable to the tendency to insufficiently discount the influence of the situation. We expect that the correspondence bias affects outcomes in these domains.

    Our theory holds that a firm's good fortune (in the form of greater profits) will be mistaken as evidence for the abilities of its managers. If this is so, then we should more often see employees of lucky firms being promoted than of unlucky firms [64]. We would expect, for instance, that pharmaceutical executives are more likely to be hired away to head other firms than are airline executives. However, this finding might be vulnerable to the critique that pharmaceutical executives actually are more capable than are airline executives–after all, their firms are more consistently profitable. Therefore, a better way to test this prediction would be using an industry (such as oil) in which fortunes fluctuate over time due to circumstances outside the control of any firm's managers. Our prediction, then, would be that oil executives are more likely to be hired away to head other firms when the oil industry is lucky (i.e., oil prices are high) than when the industry is unlucky (i.e., oil prices are low).

    Theoretical Contributions

    Our results contribute to the literature on the psychological process at work in comparative judgment, a literature that stretches across psychology [65], economics [66], and organizational behavior [67]. In this paper, we extend previous research by examining judgmental contexts in which expert decision-makers are comparing outcomes that vary with respect to both nominal performances and their ease. We should also point out that these results are, in a number of ways, more dramatic than the results of previous research showing biases in comparative judgment. Previous results have been strongest when participants themselves are the focus of judgment [65], [68]. Biases in comparative judgment shrink when people are comparing others, and shrink still further when they have excellent information about performance by those they are comparing [69]. Biases disappear when comparisons are made on a forced ranking scale [70]. In this paper, we have shown comparative judgments to be powerfully biased even when people are evaluating others about whom they have complete information (as modeled in Study 1), and even when the assessments (e.g., admission decisions) are made on a forced distribution that prevents them from rating everyone as better than everyone else.

    Continued in article


    Chronicle of Higher Education:  Students Cheat. How Much Does It Matter?
    Click Here

    . . .

    Trust your students, the pedagogical progressives advise, and they’ll usually live up to it. But that has not been Ajay Shenoy’s experience. In March, Shenoy, an assistant professor of economics at the University of California at Santa Cruz, relaxed the expectations for his winter-quarter final, making it open note and giving students more time.

    That hadn’t been Shenoy’s first impulse. Initially, he thought he might make it harder to cheat by letting students view just one question at a time, and randomizing the order of questions. The test would be timed, and everyone would take it at once.

    Then his students started to go home, and home was all over the world. Between time zones and air travel, there was no way he could expect them to all find the same two hours for an exam. Besides, he realized, his students were, understandably, incredibly stressed.

    Still, Shenoy required students to do their own work. He even asked them to let him know if they heard about anyone cheating.

    After the exam, a couple of students came forward. One had heard about classmates putting test questions on Chegg. Another was pretty sure his housemates had cheated off their fraternity brothers. Alarmed, Shenoy decided to investigate. In his research, Shenoy uses natural-language processing to detect signs of political corruption. So to understand the scope of the cheating, he wrote a simple computer program to compare students’ exam responses. He uncovered an amount of cheating he calls “stunning.”
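    Shenoy’s program is not published, but a pairwise comparison of exam responses can be sketched with the Python standard library alone. The similarity threshold and the sample answers below are made up, and real detection would also need to handle paraphrase rather than only near-verbatim copying:

```python
from difflib import SequenceMatcher
from itertools import combinations

def flag_similar_pairs(answers, threshold=0.9):
    """Return (id, id, ratio) for every pair of responses whose
    character-level similarity meets the threshold."""
    flagged = []
    for (id_a, text_a), (id_b, text_b) in combinations(answers.items(), 2):
        ratio = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()
        if ratio >= threshold:
            flagged.append((id_a, id_b, round(ratio, 2)))
    return flagged

exam = {
    "student1": "Demand falls because the price elasticity is high.",
    "student2": "Demand falls because the price elasticity is high!",  # near-copy
    "student3": "Quantity demanded rises as consumers substitute away.",
}
print(flag_similar_pairs(exam))  # flags the student1/student2 pair
```

    Flagged pairs would be leads to investigate, not proof of cheating; identical correct answers to a short-answer question can arise honestly, which is why Shenoy pursued documentation from Chegg rather than relying on text matching alone.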

    It also bothered Shenoy that it seemed to be common knowledge among his students that a number of their classmates were cheating.

    “This is the issue when people say you should just trust students more,” Shenoy says. “Even if 99 percent of the students don’t want to cheat, if that 1 percent is cheating — and if everyone else knows about it — it’s a prisoner’s dilemma, right?” Students who are honest know they are at a disadvantage, he says, if they don’t think the professor is going to enforce the rules.

    So Shenoy enforced the rules. He investigated 20 cases in his class of 312, and filed academic-misconduct reports for 18. (Those weren’t the only students who cheated, Shenoy says. Through documentation he got from Chegg, he knows many more students turned to the site. But he had time to pursue only students who had submitted questions to it.)

    In-person exam cheating, Shenoy thought, is ineffective, and probably doesn’t boost students’ grades all that much — certainly no more than, well, studying more.

    But when he compared the grades of students who had cheated with those of their classmates who didn’t, he found that the cheaters scored about 10 points higher on the exam. “I guess it’s possible that the smarter students were also the ones who chose to cheat,” Shenoy says. “But usually, in my experience, it’s the other way around.”

    Who’s hurt when students cheat? It’s their loss, some professors will argue. It’s the cheaters who’ve squandered their tuition payment, time, and opportunity to learn the material. Besides, their actions will probably catch up to them eventually. That’s not how Shenoy views it, though.

    If cheating leads to a higher grade, says the economist, then cheating is rational. “This was actually quite valuable to the student,” Shenoy says. “At the expense of the other students.”

    So Shenoy felt a responsibility. “Part of my reason for putting so much time into pursuing this,” he says, “was just out of a sense of justice for the other students.”

    Continued in article

    Jensen Comment

    I continue to repeat my example of the 60+ students who were expelled for cheating in a political science class in which every student was assured of an A grade for doing the homework. Many reported that they cheated (in this case plagiarized) because, once an A was assured irrespective of effort, their time was better spent on courses where an A was not assured.

    When some of my students took my courses on a pass-fail basis, their performance on homework, term papers, and exams was seldom as good as that of students taking the course for a letter grade. The pass-fail students seemingly did not put the time and effort into learning that students working overtime for an A or B grade did.


    Chronicle of Higher Education:  Seven Ways to Assess Students Online and Minimize Cheating ---
    Click Here

    ·   Break up a big high-stakes exam into small weekly tests.

    ·   Start and end each test with an honor statement.

    ·   Ask students to explain their problem-solving process.

    ·   Get to know each student’s writing style in low- or no-stakes tasks.

    ·   Assess learning in online discussion forums.

    ·   Don’t base grades solely on tests.

    ·   Offer students choice in how they demonstrate their knowledge.

    Jensen Comment
    If you base grades almost entirely upon examinations, make students take those examinations in some type of testing center or have the exams proctored locally.

     


  • Concept Knowledge and Assessment of Deep Understanding

    Competency-Based Learning (where teachers don't selectively assign grades) --- https://en.wikipedia.org/wiki/Competency-based_learning

    Critical Thinking --- https://en.wikipedia.org/wiki/Critical_thinking

    Over 400 Examples of Critical Thinking and Illustrations of How to Mislead With Statistics ---
    http://faculty.trinity.edu/rjensen/MisleadWithStatistics.htm

    Western Governors University (with an entire history of competency-based learning) --- http://www.wgu.edu/

    From a Chronicle of Higher Education Newsletter on November 3, 2016

    Over the past 20 years, Western Governors University has grown into a formidable competency-based online education provider. It’s on just its second president, Scott D. Pulsipher, a former Silicon Valley executive, who stopped by our offices yesterday.

    WGU has graduated more than 70,000 students, from all 50 states. But a key part of the institution’s growth strategy is local, using its affiliations with participating states (not that all the partnerships start smoothly, mind you). There are six of them, and more growth is on the way; Mr. Pulsipher says WGU is in serious discussions to expand into as many as five more states — he declines to name them — at a pace of one or two per year.

    The university's main focus remains students, he says. One example is an effort to minimize student loans. Through better advising, students are borrowing, on average, about 20 percent less than they did three years ago, amounting to savings of about $3,200. “Humans make better decisions,” Mr. Pulsipher says, “when they have more information.” —Dan Berrett

    Western Governors University is a Leading Competency-Based Learning University ---
    https://en.wikipedia.org/wiki/Western_Governors_University

    Here’s How Western Governors U. Aims to Enroll a Million Students ---
    https://www.chronicle.com/article/Here-s-How-Western-Governors/243492?cid=at&utm_source=at&utm_medium=en&elqTrackId=0fe6b239932845ee9da44c2fa67cdf5f&elq=885d6ac654144af5aff9430a4640932d&elqaid=19192&elqat=1&elqCampaignId=8710

     2016 Bibliography on Competency-Based Education and Assessment ---
    https://www.insidehighered.com/quicktakes/2016/01/26/rise-competency-based-education?utm_source=Inside+Higher+Ed&utm_campaign=0f02e8085b-DNU20160126&utm_medium=email&utm_term=0_1fcbc04421-0f02e8085b-197565045

    Bob Jensen's threads on   Competency-Based Education and Assessment ---
    See Below


    Competency-Based Learning --- https://en.wikipedia.org/wiki/Competency-based_learning

    EDUCAUSE:  Competency-Based Education (CBE) ---
    https://library.educause.edu/topics/teaching-and-learning/competency-based-education-cbe


    Mathematics Assessment Project (learning assessment) --- http://map.mathshell.org


    Educause:  2016 Students and Technology Research Study ---
    https://library.educause.edu/resources/2016/6/2016-students-and-technology-research-study

    This hub provides findings from the 2016 student study, part of the EDUCAUSE Technology Research in the Academic Community research series. ECAR collaborated with 183 institutions to collect responses from 71,641 undergraduate students across 25 countries about their technology experiences. This study explores technology ownership, use patterns, and expectations as they relate to the student experience. Colleges and universities can use the results of this study to better engage students in the learning process, as well as improve IT services, increase technology-enabled productivity, prioritize strategic contributions of IT to higher education, plan for technology shifts that impact students, and become more technologically competitive among peer institutions.

    Bob Jensen's Education Technology Threads ---
    http://faculty.trinity.edu/rjensen/000aaa/0000start.htm

    Educause:  Competency-based Education (CBE)
    https://library.educause.edu/topics/teaching-and-learning/competency-based-education-cbe

    The competency-based education (CBE) approach allows students to advance based on their ability to master a skill or competency at their own pace regardless of environment. This method is tailored to meet different learning abilities and can lead to more efficient student outcomes. Learn more from the Next Generation Learning Challenges about CBE models and grants in K-12 and higher education. 

    Organizations

    ·   CBEinfo - This site was created for schools to share lessons learned in developing CBE programs.

    ·   Competency-Based Education Network (CBEN)

    ·   CAEL Jumpstart Program

    ·   CompetencyWorks

    Competency Definition

    ·   Competency-Based Learning or Personalized Learning. This U.S. Department of Education topic page includes links to various states and districts putting CBL programs into action.

    ·   Principles for Developing Competency-Based Education Programs. Change Magazine, March/April 2014. Sally M. Johnstone and Louis Soares

    ·   The Degree Qualifications Profile, Lumina

    Bob Jensen's competency-based learning threads ---
    http://faculty.trinity.edu/rjensen/assess.htm#ConceptKnowledge


     

    Critical Thinking --- https://en.wikipedia.org/wiki/Critical_thinking

    What is Critical Thinking Anyway?
    https://chroniclevitae.com/news/1691-what-is-critical-thinking-anyway?cid=wb&utm_source=wb&utm_medium=en&elqTrackId=b1a00d70cdda451babcad48a0b78f4fa&elq=dc026b5ac5f247e4a5cadb81f89631c7&elqaid=12462&elqat=1&elqCampaignId=5069

    32 Animated Videos by Wireless Philosophy Teach You the Essentials of Critical Thinking ---
    http://www.openculture.com/2016/07/wireless-philosophy-critical-thinking.html

    Authentic Assessment Toolbox (critical thinking assessments) --- http://jfmueller.faculty.noctrl.edu/toolbox/index.htm
    Also see http://faculty.trinity.edu/rjensen/assess.htm

    Carl Sagan’s Syllabus & Final Exam for His Course on Critical Thinking (Cornell, 1986)  ---
    http://www.openculture.com/2018/01/carl-sagans-syllabus-final-exam-for-his-course-on-critical-thinking-cornell-1986.html?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+OpenCulture+%28Open+Culture%29


    Purdue University's New Competency-Based Undergraduate Degree

    "Competency for the Traditional-Age Student," by Paul Fain, Inside Higher Ed, March 30, 2016 ---
    https://www.insidehighered.com/news/2016/03/30/purdue-u-gets-competency-based-education-new-bachelors-degree?utm_source=Inside+Higher+Ed&utm_campaign=8b78e204e3-DNU20160330&utm_medium=email&utm_term=0_1fcbc04421-8b78e204e3-197565045

    Accreditor approves Purdue's new competency-based bachelor's degree, which blends technical disciplines with the humanities and has a customizable approach designed more for a career than a first job.

    Competency-based education isn’t for everyone, say even supporters of the emerging form of higher education.

    Many of the 600 or so colleges that are trying to add competency-based degrees are focused on adult, nontraditional students who want a leg up in the job market. Some of those academic programs have been developed in collaboration with specific industry partners, where an employer’s endorsement of the credential can lead to a graduate employee getting a promotion.

    Other colleges' forays into competency-based education have been in disciplines with professional licensing and a heavy dose of task-based learning, which seems like an easier fit with academic programs based on mastery rather than time in a classroom.

    That seems particularly true for research universities. For example, the University of Michigan’s first competency-based degree is a master’s of health professions education. And the University of Texas System began with a bachelor’s in biomedical science.

    The toughest nut to crack for competency-based education appears to be bachelor’s degrees aimed at traditional-age students. But that’s what Purdue University is doing with a newly approved bachelor’s in transdisciplinary studies in technology. And the customizable, competency-based degree from the new Purdue Polytechnic Institute combines technical disciplines with the humanities.

    Purdue’s personalized, interdisciplinary approach is a promising one, said Charla Long, executive director of the Competency-Based Education Network, a relatively new group of colleges and universities.

    “Competencies can be developed outside your discipline,” she said, “and be as relevant to your discipline.”

    Purdue also is less overtly focused on job training -- or at least on graduates’ first jobs -- than some might expect with a competency-based degree. In fact, the university's approach sounds like an experimental form of liberal arts education.

    “It’s about preparing students for life,” said Jeff Evans, interim associate dean for undergraduate programs at Purdue, who adds that graduates of the program “will be ready to adapt to this fast-changing world.”

    The public university began working on the new competency-based degree program in 2014. Mitch Daniels, Purdue’s president and Indiana's former governor, previously created the Purdue Polytechnic Institute, which has been tasked with working on transformational forms of undergraduate education. The institute, which is located at eight branch locations as well as Purdue's main campus, won a university-sponsored contest with its idea for the new competency-based degree.

    Customization is a big part of the degree’s novelty.

    Incoming students will be able to work one-on-one with a faculty mentor to create personalized plans of study, Purdue said, which will blend technology-focused disciplines such as computing, construction management, engineering, and aviation with social sciences, the humanities and business.

    “We’re trying to connect the passion of the students with their journey of learning,” said Evans.

    Continued in article

    Bob Jensen's threads on competency-based testing and degrees ---
    http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge

     


    In Norway, she said, "universities exchange papers for grading." Objectivity is compromised by mere humanity. Educators who engage personally with their students are psychologically vulnerable to bias in grading.
    Kathleen Tarr --- http://chronicle.com/article/A-Little-More-Every-Day-/233303/?cid=at&utm_source=at&utm_medium=en

    USA Department of Education:  Guidance on Competency-Based Education
    Inside Higher Ed, September 23, 2015 ---
    https://www.insidehighered.com/quicktakes/2015/09/23/guidance-competency-based-education?utm_source=Inside+Higher+Ed&utm_campaign=3d26811214-DNU20150923&utm_medium=email&utm_term=0_1fcbc04421-3d26811214-197565045

    The U.S. Department of Education said Tuesday it is poised to release an extensive reference guide for institutions that are participating in an experiment on competency-based education. Since that project was begun last year, the department said it became clear that more guidance was needed -- for both colleges and accrediting agencies.

    The department has yet to release the document publicly, but plans to post it at this link --- https://experimentalsites.ed.gov/exp/guidance.html

    “We believe that this guide will offer tremendous support for both experienced and new competency-based education providers as they implement this experiment,” Ted Mitchell, the under secretary of education, said in a written statement. “We recognize that many of you were anticipating that the guide would be released earlier this summer, but it was very important for us to have a high level of confidence that the guidance it contains is on very firm ground.”


    Competency-Based Learning --- https://en.wikipedia.org/wiki/Competency-based_learning

    "Measuring Competency," by Paul Fain, Inside Higher Ed, November 25, 2015 ---
    https://www.insidehighered.com/news/2015/11/25/early-glimpse-student-achievement-college-america-competency-based-degree-provider?utm_source=Inside+Higher+Ed&utm_campaign=389f6fe14e-DNU20151125&utm_medium=email&utm_term=0_1fcbc04421-389f6fe14e-197565045

    Southern New Hampshire U's College for America releases a promising early snapshot of the general-education learning and skills of students who are enrolled in a new form of competency-based education.

    A preliminary snapshot of the academic skills of students who are enrolled in a new, aggressive form of competency-based education is out, and the results look good.

    Southern New Hampshire University used an outside testing firm to assess the learning and skills in areas typically stressed in general education that were achieved by a small group of students who are halfway through an associate degree program at the university’s College for America, which offers online, self-paced, competency-based degrees that do not feature formal instruction and are completely untethered from the credit-hour standard.

    The university was the first to get approval from the U.S. Department of Education and a regional accreditor for its direct-assessment degrees. A handful of other institutions have since followed suit. College for America currently enrolls about 3,000 students, most of whom are working adults. It offers associate degrees -- mostly in general studies with a concentration in business -- bachelor’s degrees and undergraduate certificates.

    To try to kick the tires in a public way, College for America used the Proficiency Profile from the Educational Testing Service. The relatively new test assesses students in core skill areas of critical thinking, reading, writing and mathematics. It also gives “context-based” subscores on student achievement in the humanities, social sciences and natural sciences. The results could be notable because skeptics of competency-based education fear the model might not result in adequate learning in these areas.

    Continued in article


    "How a 40-Year-Old Idea Became Higher Education’s Next Big Thing," by Dan Berrett, Chronicle of Higher Education, October 28, 2015 ---
    http://chronicle.com/article/How-a-40-Year-Old-Idea-Became/233976

    . . .

    These pressures are intersecting with another mounting concern: educational quality. Together, these forces are feeding an unusual bipartisan consensus, and they are prompting higher-education leaders to take a fresh look at an old idea: competency-based education. It allows students to make progress at their own pace by demonstrating what they know and can do instead of hewing to the timeline of the semester. While this model has long been used to expand access and lower costs, particularly for adult students, it is now attracting attention as a way to shore up academic rigor.

    But this surge in interest has also sparked questions. How effective a method is it for students with varying levels of preparedness, or is it really only suited for the academically talented who can learn on their own? Can it assure educational quality, or is it just being offered to the disadvantaged as a cut-rate version of the full college experience?

    The story of how competency-based education has become the latest Next Big Thing after being around for four decades is a tale of timing, of money and politics, and of shifting academic norms.

    Advocates for competency-based learning have seen Big Things get hyped in the past, only to flame out. Still, they hope that this model of learning can ultimately achieve a grand goal: staking a claim to, defining, and substantiating quality in higher education.

    Just maybe, the new stage of development that Mr. Jessup envisioned decades ago may finally be arriving.

    A generation or two after Mr. Jessup’s prediction, a different sort of challenge confronted higher education. The end of the Vietnam War and broadening opportunities for women meant that adults who were older than the core demographic of 18- to 21-year-olds were flocking to college. But with jobs and families, they did not have the luxury of spending hours each week in a classroom.

    Competency-based education as a concept began in that era, the 1970s, with programs emerging to serve those older students. Places like Excelsior College (then Regents College), Thomas Edison State College, DePaul University’s School for New Learning, and the State University of New York’s Empire State College were among the first to offer such programs. They wanted to expand access.

    Then, as state support for higher education dropped and tuition and student-loan debt rose, so did concerns about cost.

    Those two goals, access and cost, have dominated years of efforts to remake higher education. Now, a third goal — educational quality — is driving change.

    Competency-based learning may be able to achieve all three goals, say its supporters. And, they add, it is quality that matters most. "Its potential is for a much higher level of quality and a greater attention to rigor," says Alison Kadlec, senior vice president of Public Agenda, a nonprofit organization that is playing a leading role in the growth of this model.

    "The worst possible outcome," she said, "would be that competency-based education becomes a subprime form of learning."

    Continued in article

     


    At Texas A&M
    "New Graduates Test the Promise of Competency-Based Education," by Dan Berrett, Chronicle of Higher Education, May 21, 2015 ---
    http://chronicle.com/article/New-Graduates-Test-the-Promise/230315/?cid=at

    . . .

    Same Rigor, Different Method

    The Commerce campus created its program in response to a directive by Rick Perry, then the governor of Texas, for universities to develop bachelor's degree programs that would cost students $10,000.

    Led by the Texas Higher Education Coordinating Board, faculty members and administrators at Commerce collaborated with their peers at South Texas College, analyzing labor-force projections and interviewing local employers. The data suggested that the state would see growing demand for midlevel managers with bachelor’s degrees in manufacturing and the service industry. So the professors and administrators designed a bachelor of applied arts and sciences in organizational leadership, with a largely standardized series of courses and a competency-based model. The development phase attracted money from the Bill & Melinda Gates Foundation and Educause, and the program is now delivered in hybrid form, in person and online, at South Texas and entirely online through Commerce.

    Students pay $750 for a seven-week term, during which they complete as many "competencies" as they can. That means mastering skills like problem-solving and applied research as demonstrated on written assignments or video presentations. The competencies are woven into courses for the major as well as general-education requirements.

    The biggest stumbling block for faculty members was terminology, said Ricky F. Dobbs, a professor of history at Commerce and dean of its University College.

    "You can make the word ‘competency’ mean just about anything," he said. As part of a team of faculty members and administrators that was creating the program, Mr. Dobbs and his colleagues used learning outcomes defined by the Association of American Colleges and Universities to develop a set of broad competencies in areas like change management, organizational behavior, and information literacy. The group of instructors across campuses arrived at a common understanding: Their task was to think about how their various disciplines helped students develop skills.

    To use quantitative data to make decisions, for example, students must read a paper on data analysis in government and watch a video on big data in corporations. On discussion boards, the students answer questions about the material and respond to their peers.

    To finish off that particular competency, students write at least 250 words describing the utility of statistics, offering three examples of how the field "makes a difference in all our lives, all the time." Incorporating personal examples, they must explain how translating data into information can help decision-making.

    The program design is not well suited to traditional-age students, Mr. Dobbs said, because those enrolled must complete assignments largely on their own, often applying material they’ve learned in the workplace. "It’s the same rigor," he said. "It’s simply a different method of presenting it to a different population."

    New Perspectives

    Among the new graduates, several found the experience academically challenging, even occasionally overwhelming.

    R. Michael Hurbrough Sr. said that it was one of the most difficult efforts he’d undertaken, and that he often felt like abandoning it. But he stuck with it, crediting help from Commerce faculty.

    Continued in article

     

    Jensen Comment
    There are controversies that guardhouse lawyers in the academy will raise (follow the comments at the end of this article as they unfold). Firstly, we might challenge the phrase "same rigor." In competency-based examinations there may well be more rigor in terms of technical detail and grading (recall how Coursera flunked almost everybody in a computer science course at San Jose State). But there is much less rigor in terms of class participation, such as class analysis of comprehensive cases like those that are central to the Harvard Business School and virtually all onsite law schools.

     

    Secondly, there are barriers to entry for some professions. To sit for the CPA examination a degree is not necessary, but candidates must complete 150 hours of college credit at universities approved by state boards of accountancy. Most state boards also have requirements as to the courses that must be passed in selected areas of accounting, business, information systems, and business law. If you must have 150 approved hours of credit, why not get a master's degree like most students who now sit for the CPA examination?

     

    I'm convinced that the day will come when a student's transcript will have college degrees replaced by scores of badges of accomplishment, both for course credits and for competency-based badges in areas where no courses were taken for credit (such as MOOC courses). But we are a long way from professions accepting these types of transcripts.

    Badges and certifications will probably replace college diplomas both for landing jobs and for obtaining promotions in the future.

    But not all badges and certifications are created equal. The best ones will be those that have tough prerequisites, tough grading, and tough experience requirements. There's precedent for the value of certifications in medical education. The MD degree is now only a prerequisite for such valuable certifications in ophthalmology, orthopedics, neurology, cardiovascular surgery, etc.

    What will be interesting is to see how long it will take badges/certifications to replace Ph.D. degrees for landing faculty jobs in higher education. At present specializations are somewhat ad hoc, without competency-based testing. For example, accounting professors can advance to specialties like auditing and corporate tax accounting through self-study and no competency-based testing. This may change in the future (tremble, tremble).

     

    Watch the video at
    https://www.youtube.com/watch?v=5gU3FjxY2uQ
    The introductory screen on the above video reads as follows (my comments are in parentheses)

    In Year 2020 most colleges and universities no longer exist (not true since residential colleges provide so much more than formal education)

     

    Academia no longer the gatekeeper of education (probably so but not by Year 2020)

     

    Tuition is an obsolete concept (a misleading prediction since badges will not be free in the USA, which already has $100 trillion in unfunded entitlements)

     

    Degrees are irrelevant (yeah, one-size-fits-all diplomas are pretty much dead already)

     

    What happened to education?

     

    What happened to Epic?


    Competency-Based Learning --- http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge

    "If B.A.’s Can’t Lead Graduates to Jobs, Can Badges Do the Trick?" by Goldie Blumenstyk, Chronicle of Higher Education, March 2, 2015 ---
    http://chronicle.com/article/If-BA-s-Can-t-Lead/228073/?cid=at&utm_source=at&utm_medium=en

    Employers say they are sick of encountering new college graduates who lack job skills. And colleges are sick of hearing that their young alumni aren’t employable.

    Could a new experiment to design employer-approved "badges" leave everyone a little less frustrated?

    Employers and a diverse set of more than a half-dozen universities in the Washington area are about to find out, through a project that they hope will become a national model for workplace badges.

    The effort builds on the burgeoning national movement for badges and other forms of "microcredentials." It also pricks at much broader questions about the purpose and value of a college degree in an era when nearly nine out of 10 students say their top reason for going to college is to get a good job.

    The "21st Century Skills Badging Challenge" kicks off with a meeting on Thursday. For the next nine months, teams from the universities, along with employers and outside experts, will try to pinpoint the elements that underlie skills like leadership, effective storytelling, and the entrepreneurial mind-set. They’ll then try to find ways to assess students’ proficiency in those elements and identify outside organizations to validate those skills with badges that carry weight with employers.

    The badges are meant to incorporate the traits most sought by employers, often referred to as "the four C’s": critical thinking, communication, creativity, and collaboration.

    "We want this to become currency on the job market," says Kathleen deLaski, founder of the Education Design Lab, a nonprofit consulting organization that is coordinating the project.

    No organizations have yet been selected or agreed to provide validations. But design-challenge participants say there’s a clear vision: Perhaps an organization like TED issues a badge in storytelling. Or a company like Pixar, or IDEO, the design and consulting firm, offers a badge in creativity.

    If those badges gain national acceptance, Ms. deLaski says, they could bring more employment opportunities to students at non-elite colleges, which rarely attract the same attention from recruiters as the Ivies, other selective private colleges, or public flagships. "I’m most excited about it as an access tool," she says.

    ‘Celebrating’ and ‘Translating’

    The very idea of badges may suggest that the college degree itself isn’t so valuable—at least not to employers.

    Badge backers prefer a different perspective. They say there’s room for both badges and degrees. And if anything, the changing job market demands both.

    Through their diplomas and transcripts, "students try to signal, and they have the means to signal, their academic accomplishments," says Angel Cabrera, president of George Mason University, which is involved in the project. "They just don’t have the same alternative for the other skills that employers say they want."

    Nor is the badging effort a step toward vocationalizing the college degree, participants say. As Ms. deLaski puts it: "It’s celebrating what you learn in the academic setting and translating it for the work force."

    Yet as she and others acknowledge, badges by themselves won’t necessarily satisfy employers who now think graduates don’t cut it.

    That’s clear from how employer organizations that may work on the project regard badges. "We’re presuming that there is an additional skill set that needs to be taught," says Michael Caplin, president of the Tysons Partnership, a Northern Virginia economic-development organization. "It’s not just a packaging issue."

    In other words, while a move toward badges could require colleges to rethink what they teach, it would certainly cause them to re-examine how they teach it. At least some university partners in the badging venture say they’re on board with that.

    "Some of what we should be doing is reimagining some disciplinary content," says Randall Bass, vice provost for education at Georgetown University, another participant in the project.

    Mr. Bass, who also oversees the "Designing the Future(s) of the University" project at Georgetown, says many smart curricular changes that are worth pursuing, no matter what, could also lend themselves to the goals of the badging effort. (At the master’s-degree level, for example, Georgetown has already begun offering a one-credit course in grant writing.)

    "We should make academic work more like work," with team-based approaches, peer learning, and iterative exercises, he says. "People would be ready for the work force as well as getting an engagement with intellectual ideas."

    Employers’ gripes about recent college graduates are often hard to pin down. "It depends on who’s doing the whining," Mr. Bass quips. (The critique he does eventually summarize—that employers feel "they’re not getting students who are used to working"—is a common one.)

    Where Graduates Fall Short

    So one of the first challenges for the badging exercise is to better understand exactly what employers want and whether colleges are able to provide it—or whether they’re already doing so.

    After all, notes Mr. Bass, many believe that colleges should produce job-ready graduates simply by teaching students to be agile thinkers who can adapt if their existing careers disappear. "That’s why I think ‘employers complain, dot dot dot,’ needs to be parsed," he says.

    Mr. Caplin says his organization plans to poll its members to better understand where they see college graduates as falling short.

    Continued in article

    MOOCs --- http://en.wikipedia.org/wiki/MOOCs

    Coursera --- http://en.wikipedia.org/wiki/Coursera

    Coursera /kɔərsˈɛrə/ is a for-profit educational technology company founded by computer science professors Andrew Ng and Daphne Koller from Stanford University that offers massive open online courses (MOOCs). Coursera works with universities to make some of their courses available online, and offers courses in physics, engineering, humanities, medicine, biology, social sciences, mathematics, business, computer science, and other subjects. Coursera has an official mobile app for iOS and Android. As of October 2014, Coursera has 10 million users in 839 courses from 114 institutions.

    Continued in article

    Jensen Comment
    Note that by definition MOOCs are free courses, generally offered by prestigious or other highly respected universities that serve up videos of their live on-campus courses to the world in general. MOOC leaders in this regard have been MIT, Stanford, Harvard, Penn, and other prestigious universities with tens of billions of dollars in endowments, which give these wealthy universities the financial flexibility to develop new ways to serve the public.

    When students seek some type of transcript "credit" for MOOCs, the "credits" are usually not free since they entail some type of competency hurdle such as examinations or, at a minimum, proof of participation. The "credits" are not usually granted by the universities, like Stanford, providing the MOOCs. Instead, credits, certificates, badges, or whatever are provided by private-sector companies like Coursera, Udacity, etc.

    Sometimes Coursera contracts with a college wanting to give its students credit for taking another university's MOOC, such as the now-infamous instance when more than half of the San Jose State University students in a particular MOOC course did not pass a Coursera-administered final examination.

    "What Are MOOCs Good For? Online courses may not be changing colleges as their boosters claimed they would, but they can prove valuable in surprising ways," by Justin Pope, MIT's Technology Review, December 15, 2014 ---
    http://www.technologyreview.com/review/533406/what-are-moocs-good-for/?utm_campaign=newsletters&utm_source=newsletter-daily-all&utm_medium=email&utm_content=20141215

    The following describes how a company, Coursera, long involved with the history of MOOCs, is moving toward non-traditional "credits" or "microcredentials" in a business model that it now envisions for itself as a for-profit company. Also note that MOOCs are still free for participants not seeking any type of microcredential.

    And the business model described below probably won't apply to the thousands of MOOCs in art, literature, history, etc. It may apply to subsets of business and technology MOOCs, but that alone does not mean the MOOCs are no longer free for students who are not seeking microcredentials. The payments are for the "microcredentials" awarded for demonstrated competencies. However those are defined in the future, they will not necessarily be traditional college transcript credits. A better term might be "badges of competency," but these will probably be called microcredentials.

    Whether or not these newer types of microcredentials are successful depends a great deal on the job market.
    If employers begin to rely upon them, in addition to an applicant's traditional college transcript, then Coursera's new business model may take off. This makes it essential that Coursera carefully control the academic standards for its newer types of "credits" or "badges."

     

    "Specializations, Specialized," by Carl Straumsheim, Inside Higher Ed, February 12, 2015 ---
    https://www.insidehighered.com/news/2015/02/12/coursera-adds-corporate-partners-massive-open-online-course-sequences

    Massive open online course providers such as Coursera have long pointed to the benefits of the data collected by the platforms, saying it will help colleges and universities understand how students learn online. Now Coursera’s data is telling the company that learners are particularly interested in business administration and technology courses to boost their career prospects -- and that they want to take MOOCs at their own pace.

    As a result, Coursera will this year offer more course sequences, more on-demand content and more partnerships with the private sector.

    Asked if Coursera is closer to identifying a business model, CEO Rick Levin said, “I think we have one. I think this is it.”

    Since its founding in 2012, Coursera has raised millions of dollars in venture capital while searching for a business model. Many questioned if the company's original premise -- open access to the world's top professors -- could lead to profits, but with the introduction of a verified certificate option, Coursera began to make money in 2013. By that October, the company had earned its first million.

    In the latest evolutionary step for its MOOCs, Coursera on Wednesday announced a series of capstone projects developed by its university partners in cooperation with companies such as Instagram, Google and Shazam. The projects will serve as the final challenge for learners enrolled in certain Specializations -- sequences of related courses in topics such as cybersecurity, data mining and entrepreneurship that Coursera introduced last year. (The company initially considered working with Academic Partnerships before both companies created their version of Specializations.)

    The announcement is another investment by Coursera in the belief that adult learners, years removed from formal education, are increasingly seeking microcredentials -- bits of knowledge to update or refresh old skills. Based on the results from the past year, Levin said, interest in such credentials is "palpable." He described bundling courses together into Specializations and charging for a certificate as “the most successful of our product introductions." Compared to when the sequences were offered as individual courses, he said, enrollment has “more than doubled” and the share of learners who pay for the certificate has increased “by a factor of two to four.”

    “I think people see the value of the credential as even more significant if you take a coherent sequence,” Levin said. “The other measure of effectiveness is manifest in what you’re seeing here: company interest in these longer sequences.”

    Specializations generally cost a few hundred dollars to complete, with each individual course in the sequence costing $29 to $49, but Coursera is still searching for the optimal course length. This week, for example, learners in the Fundamentals of Computing Specialization were surprised to find its three courses had been split into six courses, raising the cost of the entire sequence from $196 to $343. Levin called it a glitch, saying learners will pay the price they initially agreed to.

    The partnerships are producing some interesting pairings. In the Specialization created by faculty members at the University of California at San Diego, learners will “design new social experiences” in their capstone project, and the best proposals will receive feedback from Michel "Mike" Krieger, cofounder of Instagram. In the Entrepreneurship Specialization out of the University of Maryland at College Park, select learners will receive an opportunity to interview with the accelerator program 500 Startups.

    As those examples suggest, the benefits of the companies’ involvement mostly apply to top performers, and some are more hypothetical than others. For example, in a capstone project created by Maryland and Vanderbilt University faculty, learners will develop mobile cloud computing applications for a chance to win tablets provided by Google. “The best apps may be considered to be featured in the Google Play Store,” according to a Coursera press release.

    Anne M. Trumbore, director of online learning initiatives at the University of Pennsylvania’s Wharton School, said the capstone projects are an “experiment.” The business school, which will offer a Specialization sequence in business foundations, has partnered with the online marketplace Snapdeal and the music identification app Shazam, two companies either founded or run by Wharton alumni.

    “There’s not a sense of certainty about what the students are going to produce or how the companies are going to use it,” Trumbore said. “Snapdeal and Shazam will look at the top projects graded highest by peers and trained staff. What the companies do after that is really up to them. We have no idea. We’re casting this pebble into the pond.”

    Regardless of the companies' plans, Trumbore said, the business school will waive the application fee for the top 15 learners in the Specialization and provide scholarship money to those who matriculate through that pipeline.

    “The data’s great, but the larger incentive for Wharton is to discover who’s out there,” Trumbore said.

    Levin suggested the partnering companies may also be able to use the Specializations as a recruitment tool. “From a company point of view, they like the idea of being involved with educators in their fields,” he said. “More specifically, I think some of the companies are actually hoping that by acknowledging high-performing students in a couple of these capstone projects they can spot potential talent in different areas of the world.”

    While Coursera rolled out its first Specializations last year, Levin said, it also rewrote the code powering the platform to be able to offer more self-paced, on-demand courses. Its MOOCs had until last fall followed a cohort model, which Levin said could be “frustrating” to learners when they came across an interesting MOOC but were unable to enroll. After Coursera piloted an on-demand delivery method last fall, the total number of such courses has now reached 47. Later this year, there will be “several hundred,” he said.

    “Having the courses self-paced means learners have a much higher likelihood of finishing,” Levin said. “The idea is to advantage learners by giving them more flexibility.”

    Some MOOC instructors would rather have rigidity than flexibility, however. Levin said some faculty members have expressed skepticism about offering on-demand courses, preferring the tighter schedule of a cohort-based model.

    Whether it comes to paid Specializations versus free individual courses or on-demand versus cohort-based course delivery, Levin said, Coursera can support both. “Will we develop more Specializations? Yes. Will we depreciate single courses? No,” he said. “We don’t want to discourage the wider adoption of MOOCs.”

    Continued in article

    Bob Jensen's threads on MOOCs are at
    http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI 


    Beyond the Essay: Making Student Thinking Visible in the Humanities (a brainstorming project on teaching critical thinking) ---
    http://cft.vanderbilt.edu/guides-sub-pages/beyond-the-essay/
    Bob Jensen's threads on critical thinking and why it's so hard to teach ---
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#CriticalThinking
    Also see
    http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge

    "Beyond Critical Thinking," by Michael S. Roth, Chronicle of Higher Education's Chronicle Review, January 3, 2010 ---
    http://chronicle.com/article/Beyond-Critical-Thinking/63288/

    Learn Psychology --- http://www.learnpsychology.org/


    This is a very good article on the major issues of competency-based assessment of learning
    "Performance-Based Assessment," by  Steven Mintz, Inside Higher Ed, April 29, 2015 ---
    https://www.insidehighered.com/blogs/higher-ed-beta/performance-based-assessment

    . . .

    In contrast, classroom discussions, debates, and case studies tend to emphasize analysis, synthesis, and evaluation. Students are typically asked to offer a critique or assessment, identify bias, present a judgment, or advance a novel interpretation.

    Performance-based assessment offers a valuable alternative (or supplement) to the standard forms of student evaluation. It requires students to solve a real-world problem or to create, perform, or produce something with real-world application. It allows an instructor to assess how well students are able to use essential skills and knowledge, think critically and analytically, or develop a project. It also offers a measure of the depth and breadth of a student’s proficiencies.

    Performance-based assessment can, in certain instances, simply be an example of what Bloom’s Taxonomy calls application. Thus, a student or a team might be asked to apply knowledge and skills to a particular task or problem. 

    But performance-based assessment can move beyond Bloom’s Taxonomy when students are engaged in a project that requires them to display creativity and that results in an outcome, project, or performance that is genuinely new. The more sophisticated performance assessments involve research, planning, design, development, implementation, presentation, and, in the case of team-based projects, collaboration.  

    If performance-based assessments are to be fair, valid, and reliable, it is essential that there is an explicit rubric that lays out the criteria for evaluation in advance. It is also helpful to ask students to keep a log or journal to document the project’s development and record their reflections on the developmental process.

    The most commonly used assessments – the midterm and final or the term paper – have an unpleasant consequence. Reliance on a small number of high stakes assessments encourages too many students to coast through the semester and to pull all-nighters when their grade is on the line. This may inadvertently encourage a party culture.

    In stark contrast, performance-based assessment offers a way to ensure that evaluation is truly a learning experience, one that engages students and that measures the full range of their knowledge and proficiencies.

    Steven Mintz is Executive Director of the University of Texas System's Institute for Transformational Learning and Professor of History at the University of Texas at Austin. Harvard University Press will publish his latest book, The Prime of Life: A History of Modern Adulthood, next month.

     


    Arizona State's Freshman Year MOOCs Open to All With Final Examinations for Inexpensive Credits

    "Arizona State and edX Will Offer an Online Freshman Year, Open to All," by Charles Huckabee, Chronicle of Higher Education, April 23, 2015 ---
    http://chronicle.com/blogs/ticker/arizona-state-and-edx-will-offer-an-online-freshman-year-open-to-all/97685?cid=wc&utm_source=wc&utm_medium=en

    Arizona State University is joining with the MOOC provider edX in a project that it says “reimagines the freshman year” and opens a new low-cost, low-risk path to a college degree for students anywhere in the world.

    The project, called the Global Freshman Academy, will offer a set of eight courses designed to fulfill the general-education requirements of a freshman year at Arizona State at a fraction of the cost students typically pay, and students can begin taking courses without going through the traditional application process, the university said in a news release on Wednesday. Because the classes are offered as massive open online courses, or MOOCs, there is no limit on how many students can enroll.

    . . .

    The courses to be offered through the Global Freshman Academy are being designed and will be taught by leading scholars at Arizona State. “These courses are developed to their rigorous standards,” Adrian Sannier, chief academic officer for EdPlus at ASU, said in the release. “Course faculty are committed to ensuring their students understand college-level material so that they can be prepared to successfully complete college.”

    Students who pass a final examination in a course will have the option of paying a fee of no more than $200 per credit hour to get college credit for it.

    Mr. Agarwal and Mr. Crow are scheduled to formally announce the project at a conference in Washington on Thursday.

     

    Jensen Comments and Questions
    The real test is how well these credits are accepted by other universities for transfer credit. It probably will not be an issue for graduate school admission, since three more years of more traditional onsite or online credits follow. But it could be a huge issue when, for example, a student earns a first year of ASU MOOC credits and then tries to have those credits accepted by universities (such as TCU) that still resist accepting any online courses for transfer credit.

    Question
    What are the main differences between MOOC online credits and traditional online credits such as those documented at the following site?
    http://faculty.trinity.edu/rjensen/CrossBorder.htm

    For example, at many universities these days there are multiple sections of a course where some sections are onsite and some are online. Often they are taught by the same instructor. The online sections are usually as small as or even smaller than the onsite sections, because online instructors often have more student interactions, such as instant messaging, that are not available to onsite students ---
    http://en.wikipedia.org/wiki/Instant_messaging

    Answer
    There are several obvious differences between MOOC online credits and traditional online credits.

    The bottom line is that it appears the ASU freshman year MOOC course credits will be little more than competency-based credits. This will be controversial, since many faculty in higher education feel that credits in general education core courses, including first-year core courses, should entail class participation. For example, at Trinity University there is a first-year seminar that all new students take in very small classes requiring a lot of class participation in discussions of assigned readings and the writing of term papers. I think some sections of this seminar don't even have examinations. I did not have examinations when I taught a section of this seminar for two years.

    In traditional large lectures courses on campus students typically are broken out into accompanying recitation sections intended for class participation and interactions with a recitation instructor.

    Jensen Note
    I never anticipated competency-based credits in the first year of college. I think these will be wildly popular in advanced-level training courses such as a CPA examination review course in the final (fifth) year of an accounting program. Using competency-based courses for first-year general education courses is more controversial.


    Competency-Based Degrees Without Course Requirements
    "U.S. Approval for Wisconsin Competency-Based Program," Inside Higher Ed, September 3, 2014 ---
    https://www.insidehighered.com/quicktakes/2014/09/03/us-approval-wisconsin-competency-based-program

    Jensen Comment
    There are somewhat similar options at other universities like the University of Akron, Southern New Hampshire, and Capella.

    We seem to have come full circle from the 19th Century when the University of Chicago gave course credits for passing final examinations even if students did not attend classes.

    "Capella Gets Federal Approval for Competency-Based Degrees," Inside Higher Ed,  August 13, 2013 ---
    http://www.insidehighered.com/quicktakes/2013/08/13/capella-gets-federal-approval-competency-based-degrees

    Northern Arizona University Offers a Dual Transcript Option, One of Which is Competency-Based
    "Competency-Based Transcripts," by Paul Fain, Inside Higher Ed, August 9, 2013 ---
    http://www.insidehighered.com/news/2013/08/09/northern-arizona-universitys-new-competency-based-degrees-and-transcripts

    Jensen Comment
    This program at Northern Arizona differs from the competency-based programs at the University of Wisconsin, the University of Akron, Capella University, and Southern New Hampshire University in that students at Northern Arizona must sign up for online courses at Northern Arizona before becoming eligible for the competency-based transcript. It differs from Western Governors University in that there are two transcripts rather than just a competency-based transcript for online courses.

    Capella may have a more difficult time than the University of Wisconsin, the University of Akron, and Southern New Hampshire University in getting employers and graduate schools to accept its competency-based transcript credit. Time will tell. Much depends upon other criteria such as SAT scores, GRE scores, GMAT scores, LSAT scores, MCAT scores, and professional licensing examination scores.


    December 19, 2014 Department of Education Letter
    Q&A Regarding Competency-Based College Credits
    (and merit badges of competence)
    http://ifap.ed.gov/dpcletters/GEN1423.html

    Bob Jensen's threads on competency-based education.
    http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge
    Note that there are two very different types of programs --- those that require courses versus those that require no courses. For example, Western Governors University requires course credits, but distance education course instructors do not assign grades in a traditional manner. Instead, grading is based on required competency-based performance examinations.

    At the other extreme, a few universities like the University of Wisconsin now have selected programs where students can earn college credits based upon competency-examination scores without course sign-ups. These programs are considered the first steps toward what is increasingly known as a transcript of merit badges that may eventually replace traditional degree programs such as master's degrees in professions such as medicine.

    In a sense, residency programs in medical schools already have "merit badges" based upon experience and competency (licensing) examinations to become ophthalmologists, cardiologists, urologists, neurologists, etc.

    Video:  A Scenario of Higher Education in 2020

    November 14, 2014 message from Denny Beresford

    Bob,

    The link below is to a very interesting video on the future of higher education – if you haven’t seen it already. I think it’s very consistent with much of what you’ve been saying.

    Denny

    http://www.youtube.com/watch?v=5gU3FjxY2uQ

    November 15, 2014 reply from Bob Jensen

    Hi Denny,

    Thank you for this link. I agree with many parts of this possible scenario, and viewers should patiently watch it through the Google Epic in 2020.

    But this is only one of many possible scenarios, and I definitely do not agree with the predicted timings. None of the predictions for the future will happen in such a short time frame.

    It takes a long time for this video to mention the role of colleges as a buffer between living as a protected kid at home and working full time on the mean streets of life. And I don't think campus living and learning in the future will just be for the "wealthy." We're moving toward a time when campus living will be available more and more to gifted non-wealthy students. But we're also moving toward a time when campus living and learning may be available to a smaller percentage of students --- more like Germany where campus education is free, but only the top 25% of the high school graduates are allowed to go to college. The other 75% will rely more and more on distance education and apprenticeship training alternatives.

    Last night (November 14) there was a fascinating module on CBS News about a former top NFL lineman (center) for the Rams who in the prime of his career just quit and bought a 1,000 acre farm in North Carolina using the millions of dollars he'd saved until then by playing football.

    What was remarkable is that he knew zero about farming until he started learning about it on YouTube. Now he's a successful farmer who gives over 20% of his harvest to food banks for the poor.

    This morning I did a brief search and discovered that there are tons of free videos on the technical aspect of farming just as there are tons of videos that I already knew about on how to be a financial analyst trading in derivative financial instruments.

    My point is that there will be more and more people who are being educated and trained along the lines of the video in your email message to me.
     http://www.youtube.com/watch?v=5gU3FjxY2uQ 
    The education and training will be a lifelong process because there is so much that will be available totally free of charge. We will become more and more like Boy-Girl Scouts earning our badges.

    College degrees will be less and less important as the certification badges (competency achievements) mentioned in the video take over as chevrons of expertise and accomplishment. Some badges will be for hobbies, and some badges will be for career advancement.

    These are exciting times for education and training. We will become more and more like the Phantom of the Library at Texas A&M without having to live inside a library. This "Phantom" Aggie was a former student who started secretly living and learning in the campus library. Now the world's free "library" is only a few clicks away --- starting with Wikipedia and YouTube and moving on to the thousands of MOOCs now available from prestigious universities ---
    http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI 

    Also see the new-world library alternatives at
    http://faculty.trinity.edu/rjensen/bookbob2.htm

    Thanks Denny

    Bob


    "Looking Into Competency-Based Education," Inside Higher Education, January 26, 2015 ---
    https://www.insidehighered.com/quicktakes/2015/01/26/looking-competency-based-education

    A growing number of colleges are offering competency-based degrees, and the emerging form of higher education has caught the attention of state and federal policy makers. Yet few researchers have taken an in-depth look at the range of competency-based programs. A new paper from the American Enterprise Institute's Center on Higher Education Reform tries to change this.

    The paper by Robert Kelchen, an assistant professor of education at Seton Hall University, is the first in a series that will seek to "explore the uncharted landscape." Kelchen concludes that competency-based education has the potential to "streamline the path to a college degree for a significant number of students." Yet many questions remain about who is currently enrolled in these programs, he wrote, or how the degree tracks are priced.


    "Competency, Texas-Style," by Paul Fain, Inside Higher Ed, November 6, 2014 ---
    https://www.insidehighered.com/news/2014/11/06/competency-based-health-profession-credentials-university-texas-system

    The University of Texas System plans to make its first foray into competency-based education fittingly far-reaching.

    The system’s forthcoming “personalized” credentials will be limited to the medical sciences, for now. But the new, competency-based curriculum will involve multiple institutions around the state, system officials said, with a track that eventually will stretch from high school, or even middle school, all the way to medical school.

    Many details still need to be hashed out about the project, which the system announced this week. But several key elements are in place.

    Continued in article

    Jensen Comment
    Competency-based college credits are now widely available from both non-profit and for-profit universities. However, the programs are often restricted to certain disciplines, frequently at the graduate level. In Western Canada, for example, the Chartered Accountancy School of Business (CASB) has offered a competency-based masters degree for years. However, students do enroll in courses and have extensive internships on the job ---
    http://www.casb.com/

    Bob Jensen's threads on competency-based college credits ---
    http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge

     


    College Credits Without Courses
    "Managing Competency-Based Learning," by Carl Straumsheim, Inside Higher Ed, September 29, 2014 ---
    https://www.insidehighered.com/news/2014/09/29/college-america-spins-its-custom-made-learning-management-system

    Southern New Hampshire University, seeing an opening in the market for a learning management system designed around competency-based education, is spinning off the custom-made system it built to support College for America.

    Before College for America launched in January 2013, the university considered building a platform to support the competency-based education subsidiary on top of the learning management system used on campus, Blackboard Learn. The university instead picked Canvas, created by Instructure, but after only a couple of months, “we decided we needed to build our own,” said Paul J. LeBlanc, president of the university.

    For most colleges and universities, any one of the major learning management systems on the market will likely meet their requirements for posting course content and engaging with students outside the classroom. But for institutions that don’t tie academic progress to the course or the credit hour -- or have an unconventional method of delivering education -- those same systems may be restrictive.

    “We speak of the world of LMSes as a world that’s designed around content delivery, course delivery and the mechanics of running a course,” LeBlanc said. “It’s very course-centric, so we built our program on the basis of our relationship with our students.”

    LeBlanc and College for America are calling it a “learning relationship management system,” a composite term to describe a learning management system built on top of Salesforce, the popular customer relationship management software. LeBlanc said the system aims to strike a balance between “lots of things that CIOs love” -- such as software as a service and cloud hosting -- and “what educators love.”

    For students, the system looks more like a social network than a learning management system. When they log in, students are greeted by an activity feed, showing them a tabbed view of their current projects, goals and feedback. A column on the right side of the screen lists connections and to-dos, and a bar along the top tracks progress toward mastering competencies.

    Behind the scenes, faculty members and administrators are treated to a stream of data about everything students do inside the system, from when they submitted their paperwork and their statement of purpose to the surveys they have answered and the time spent talking to academic coaches.

    “I think this next generation of systems is really going to be about data and analytics and relationship management,” LeBlanc said. “The whole shift in conversation, it seems to me, is about student-centeredness.”

    On Oct. 1, one year after the system went live at College for America, the university is spinning it off as Motivis Learning and writing the for-profit subsidiary a $7 million check. In its first phase, LeBlanc said, the company will further develop its platform based on how other institutions are approaching competency-based learning.

    One of Motivis’s early design partners, the University of Central Missouri, hopes to use the system to cut down on administrative overlap. Its Missouri Innovation Campus program, which gives students an opportunity to earn a bachelor’s degree two years after graduating from high school, has in its first year attempted to tie together data from a school district, a community college and a four-year institution with manual spreadsheet work.

    “We’ve likened it to trying to cobble together three different student information systems, three different registrations ..., three student IDs, three admissions portfolios,” said Charles M. (Chuck) Ambrose, president of the university. “What we’re trying to envision is that this LMS will help move us to a superhighway or an Autobahn.”

    The university will also be able to invite local businesses into the system, allowing internship supervisors to log students’ progress instead of filling out a paper form, Ambrose said.

    Central Missouri’s model is one of many Motivis is interested in tweaking its system to support, said Brian Peddle, College for America’s chief technology officer, who will become the company's CEO. One idea, he said, is to produce the common features of any learning management system, then offer “building blocks” to support traditional courses, competency-based learning and other modes of delivery.

    Continued in article

    Bob Jensen's threads on alternative universities that now have competency-based learning alternatives ---
    http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge

    Some like Western Governors University require course enrollments but grade on the basis of competency-based examinations.

    Others like the University of Wisconsin, the University of Akron, and Southern New Hampshire University do not require course enrollments.


    Kaplan University --- http://en.wikipedia.org/wiki/Kaplan_University

    "For-Profit Giant Starts Competency-Based ‘Open College’," by Goldie Blumenstyk, Chronicle of Higher Education, October 3, 2014 ---
    http://chronicle.com/article/For-Profit-Giant-Starts/149227/?cid=at&utm_source=at&utm_medium=en

    One of the biggest for-profit college companies in the country is creating an "Open College" aimed at adults who may already have skills and experience that could qualify for college credits.

    The new venture, from Kaplan Higher Education, will include free online services and personalized mentoring to help people identify and organize prior experience and skills that could count toward a degree or move them closer to a new career.

    It will also provide fee-based services, under a subscription model, that will offer ways for students to satisfy the remaining requirements for a bachelor of science degree in professional studies from Kaplan University. Students who enroll in Open College could take courses at Kaplan University or from other sources, such as the MOOC provider edX or the Saylor Foundation, as long as the students ultimately meet the course outcomes set by Open College.

    Kaplan Higher Education, part of the Graham Holdings Company, hopes to begin enrolling its first Open College@KU students on Monday.

    The Kaplan offerings respond to a growing interest in competency-based education and a concern among many higher-education experts about the absence of tools to help people, especially adults, find more economical and efficient pathways to degrees and careers.

    Other ventures, including the movement around "badges," are trying to develop ways to take students’ informally acquired knowledge and "certify it, organize it, and credential it," notes Mark S. Schneider, a vice president at the American Institutes for Research who studies the earnings of college graduates. The Kaplan venture is "touching a need that everybody recognizes," he says, but whether it can actually execute the idea remains to be seen.

    Open College will not participate in federal student-aid programs. But company officials say it will nonetheless offer an "affordable" path to a college degree through its use of assessments that give credit for prior learning and the self-paced program.

    With enrollment subscription costs of $195 a month, charges of $100 per assessment for each of the 35 course equivalents needed to earn credits toward a degree, and a $371-per-credit charge for a final six-credit capstone course, a student entering with no credits who pursued the program for 48 straight months could earn a bachelor’s degree for about $15,000. Students who earned credits based on their prior experience would end up paying less than that.

    Officials expect that such students would typically enroll with about 60 credits, take 24 to 30 months to complete a degree, and pay about $9,500.
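The cost figures above can be checked with simple arithmetic. A minimal sketch, assuming the subscription, assessment, and capstone fees simply sum (the function name and structure are mine, not Kaplan's):

```python
# Hypothetical model of the Open College cost structure described above.
# Fee figures come from the article; the additive structure is my assumption.

def open_college_cost(months, assessments, capstone_credits=6,
                      monthly_fee=195, assessment_fee=100,
                      capstone_per_credit=371):
    """Estimate total cost: subscription + assessment fees + capstone course."""
    return (months * monthly_fee
            + assessments * assessment_fee
            + capstone_credits * capstone_per_credit)

# Student entering with no credits: 48 months, all 35 course-equivalent
# assessments, plus the six-credit capstone.
print(open_college_cost(48, 35))  # 15086, i.e. "about $15,000"
```

A student entering with prior credits needs fewer months and fewer assessments, which is consistent with the article's roughly $9,500 estimate for someone arriving with about 60 credits.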

    'A Good Algorithm'

    Mr. Schneider says the success of the venture, for Kaplan and for students, depends on the quality of the counseling and the costs of providing it. And that will depend on how much of it is based on online templates or personalized service.

    "Obviously, if you have a good algorithm, then your price is really low," he says. And if Kaplan has that, he says, "more power to them. But if it’s human interaction, how can you do it for $195 a month?"

    Competency-based degrees are not new, even in the for-profit-college sector. Capella University’s year-old FlexPath program, for example, now offers six degrees and enrolls about 100 graduate and undergraduate students per quarter.

    Peter Smith, president of the new Kaplan venture, says the free features of Open College set it apart. For example, at the nonprofit Western Governors University, he says, "you have to enroll" before you can know where you stand. "They do not help you figure all the stuff out prior to enrollment."

    Mr. Smith is no stranger to higher education. Before joining Kaplan seven years ago, he was founding president of the Community College of Vermont, founding president of California State University-Monterey Bay, and a member of Congress. He says the offerings will help students who have accumulated learning but don’t know "how to turn it into something valuable to themselves."

    The venture is not Kaplan’s first foray into prior-learning assessments. In 2011 the company announced a service it then called KNEXT that would, for a fee, advise students on preparing portfolios that demonstrated their expertise and qualifications and then submitting them for credit evaluation. But that effort didn’t catch on. In fact, Mr. Smith says, only two students outside of Kaplan University used the $1,500 service.

    But within Kaplan University, thousands of students took advantage of a variant of that service in the form of a course. Kaplan Higher Education also created a free online pilot version, called the Learning Recognition Course, that it has been testing for the past year.

    Mr. Smith says students who took the free online course or Kaplan's instructor-led version used it to turn their experiences into something of value: college credit. On average, the 100 or so students who took the online course requested 50 quarter-course credits and were awarded an average of 37. Those at Kaplan sought an average of 36 credits and were awarded 27.

    A Gateway

    Now that the online course will be a gateway to Open College@KU, students can take it at no cost to learn how to develop their expertise into a portfolio. Then, if they later elect to have their experience and skills assessed for credit, they will have several options: find another college willing to evaluate their portfolio for credit; pay NEXT (as KNEXT has since been renamed) to do an assessment for credit; enroll in Kaplan University or its Mount Washington College, which will waive the fees for assessing the credits; or enroll in the new Open College, which will assess the credits as part of the basic subscription price.

    Continued in article

    Jensen Comment

    There are several ways to spot diploma mills.

    I don't think the owner of Kaplan University will let it become a diploma mill, although there were some academic scandals in the past, before The Washington Post, which owns Kaplan University, was sold to Jeff Bezos, the billionaire founder of the giant online retailer Amazon. An enormous academic scandal is publicity that I'm sure Bezos will desperately try to avoid. Amazon depends too much on the legitimate academic market.

    The essence of this new Kaplan open-enrollment program is to give credit for "life experience" based upon competency-based testing. As the saying goes --- the Devil is in the details. In this case the details surround the rigor that makes graduates of the program competitive with graduates of respected colleges and universities in the Academy. Only the Ivy League universities  can get away with courses where everybody gets an A grade. The reason is that the admission criteria allow for extreme grade inflation in these prestigious universities. Kaplan University is a long way from the Ivy League.

    Kaplan University is not the only for-profit university offering course credits based on competency-based testing. The Department of Education had already approved the competency-based testing programs at Capella University, and similar programs have been approved at non-profit and public universities like the University of Wisconsin, the University of Akron, and Southern New Hampshire University. The Kaplan program, however, appears to be more personalized in terms of services beyond mere administration of competency-based examinations.

    I don't think any of these programs are intended for the dropouts or graduates of ghetto schools in the largest cities of the USA. It's too expensive and complicated to prepare unmotivated students for college when they cannot even read properly or do basic arithmetic. The competency-based programs are aimed at highly motivated self-learners at higher levels of competency. For example, such programs might seek out top high school graduates who dropped out of college along the way for a variety of possible reasons, including unintended parenthood. They might eventually even include college graduates trying to prepare for certain vocations like nursing, pharmacy, or accounting.

    As I said above, the Devil is in the details --- meaning that the Devil is in the competency-based testing rigor.


    "College, on Your Own: Competency-based education can help motivated students. But critics say it’s no panacea," by Dan Barrett, Chronicle of Higher Education, July 14, 2014 ---
    http://chronicle.com/article/College-on-Your-Own/147659/?cid=wb&utm_source=wb&utm_medium=en

    Jensen Comment
    Several major universities like the University of Wisconsin and the University of Akron are now providing competency-based testing for college credit. Western Governors University for years is a bit different. It grades on the basis of competency-based testing but also requires that students enroll in courses. Years and years ago the University of Chicago allowed students to take final examinations for credit even though the students were not enrolled in courses. Like it or not we seem to be going full circle.


    Mathematics Assessment: A Video Library --- http://www.learner.org/resources/series31.html


    Western Governors University --- http://en.wikipedia.org/wiki/Western_Governors_University

    "In Boost to Competency Model, Western Governors U. Gets Top Marks in Teacher Ed," by Dan Barrett, Chronicle of Higher Education, June 17, 2014 ---
    http://chronicle.com/article/In-Boost-to-Competency-Model/147179/?cid=at&utm_source=at&utm_medium=en 

     


    "Competency-Based Degrees: Coming Soon to a Campus Near You," by Joel Shapiro, Chronicle of Higher Education, February 17, 2014 ---
    http://chronicle.com/article/Competency-Based-Degrees-/144769/?cid=cr&utm_source=cr&utm_medium=en

    Has distance education significantly affected the business and teaching models of higher education? Certainly. Is it today’s biggest disrupter of the higher-education industry? Not quite. In fact, the greatest risk to traditional higher education as we know it may be posed by competency-based education models.

    Competency-based programs allow students to gain academic credit by demonstrating academic competence through a combination of assessment and documentation of experience. The model is already used by institutions including Western Governors University, Southern New Hampshire University, Excelsior College, and others, and is a recent addition to the University of Wisconsin system.

    Traditional educators often find competency programs alarming—and understandably so. Earning college credit by virtue of life experience runs afoul of classroom experience, which many educators believe to be sacred. As a colleague recently said, "Life is not college. Life is what prepares you for college."

    In fact, traditional educators should be alarmed. If more institutions gravitate toward competency-based models, more and more students will earn degrees from institutions at which they take few courses and perhaps interact minimally with professors. Then what will a college degree mean?

    It may no longer mean that a student has taken predetermined required and elective courses taught by approved faculty members. Rather, it would mean that a student has demonstrated a defined set of proficiencies and mastery of knowledge and content.

    Competency models recognize the value of experiential learning, in which students can develop and hone skill sets in real-world contexts. For instance, a student with a background in web design may be able to provide an institution with a portfolio that demonstrates mastery of computer coding or digital design. If coding or digital design is a discipline in which the institution gives credit, and the mastery demonstrated is sufficiently similar to that achieved in the classroom, then the institution may grant credit based on that portfolio.

    The logic of competency-based credit is compelling. After all, colleges and universities hire most people to teach so that students learn. If students can achieve the desired learning in other ways, then why not provide them with the same credential as those who sat in the traditional classrooms with the traditional faculty members?

    Additionally, the competency-based model, so often cast aside by traditional institutions, already exists within their walls. Not only do many colleges give credit for real-world learning through (sometimes mandatory) internships, but a version of the competency model has long been part of traditional assessment practices.

    Most professors grade students on the basis of their performance on particular assignments, such as papers, tests, and projects. If a student’s final paper reflects a sufficient degree of sophistication and mastery, then the professor gives the student a passing grade, thus conferring credit. But how much can the professor really know about how the student learned the material? If the end is achieved, how much do the means matter?

    In primary and secondary education, much is made of measuring students’ growth. A successful teacher moves a student from Point A to Point B. The greater the difference between A and B, arguably, the more effective the teacher. But in higher education, rarely is any effort made to formally assess student growth. Rather, professors typically give grades based on final performance, regardless of students’ starting point. In the classroom, competency models rule, even at traditional institutions.

    The primary weakness of competency models, however, is that they can be only as good as the assessment mechanisms they employ, and, unfortunately, no assessment can be a perfect proxy for deep and meaningful learning. Certainly, great education isn’t just about content. It challenges students to consider others’ viewpoints, provides conflicting information, and forces students to reconcile, set priorities, and choose. In the best cases, it engenders a growth of intellect and curiosity that is not easily definable.

    Higher-end learning remains the defining value proposition of great teaching within a formal classroom setting. But because it is exceedingly hard to assess, it cannot easily be incorporated into competency models.

    Nonetheless, competency models will make significant headway at the growing number of institutions that offer skill-based programs with clearly delineated and easily assessed learning outcomes. They will also appeal to students who want to save time and money by getting credit applied to past experience. Institutions that serve these students will thus find competency models to be a competitive advantage.

    Meanwhile, institutions that are unwilling or unable to incorporate elements of a competency model will be forced to defend the value of learning that cannot be easily assessed and demonstrated. That will be a hard message to communicate and sell, especially given that students with mastery of applied and technical skill sets tend to be rewarded with jobs upon graduation. Additionally, noncompetency tuition will almost certainly rise relative to competency-based credit models, which require less instruction and thus can be delivered at lower cost.

    The marketplace rarely reacts well to perceived low marginal benefit at high marginal price.

    Continued in article

    Bob Jensen's threads on competency-based assessment and assessment of deep understanding:
    Concept Knowledge and Assessment of Deep Understanding ---
    http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge


    "The Baloney Detection Kit: Carl Sagan’s Rules for Bullshit-Busting and Critical Thinking," by Maria Popova, Brain Pickings, January 3, 2014 ---
    http://www.brainpickings.org/index.php/2014/01/03/baloney-detection-kit-carl-sagan/

    Carl Sagan was many things — a cosmic sage, voracious reader, hopeless romantic, and brilliant philosopher. But above all, he endures as our era’s greatest patron saint of reason and common sense, a master of the vital balance between skepticism and openness. In The Demon-Haunted World: Science as a Candle in the Dark (public library) — the same indispensable volume that gave us Sagan’s timeless meditation on science and spirituality, published mere months before his death in 1996 — Sagan shares his secret to upholding the rites of reason, even in the face of society’s most shameless untruths and outrageous propaganda.

    In a chapter titled “The Fine Art of Baloney Detection,” Sagan reflects on the many types of deception to which we’re susceptible — from psychics to religious zealotry to paid product endorsements by scientists, which he held in especially low regard, noting that they “betray contempt for the intelligence of their customers” and “introduce an insidious corruption of popular attitudes about scientific objectivity.” (Cue in PBS’s Joe Hanson on how to read science news.) But rather than preaching from the ivory tower of self-righteousness, Sagan approaches the subject from the most vulnerable of places — having just lost both of his parents, he reflects on the all too human allure of promises of supernatural reunions in the afterlife, reminding us that falling for such fictions doesn’t make us stupid or bad people, but simply means that we need to equip ourselves with the right tools against them.

    Continued in article

    Concept Knowledge and Assessment of Deep Understanding ---
    http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge

     


    "The Degree Is Doomed," by Michael Staton, Harvard Business Review Blog, January 9, 2014 ---
    http://blogs.hbr.org/2014/01/the-degree-is-doomed/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+harvardbusiness+%28HBR.org%29&cm_ite=DailyAlert-010914+%281%29&cm_lm=sp%3Arjensen%40trinity.edu&cm_ven=Spop-Email

    "New Approach to Transfer," by Paul Fain, Inside Higher Ed, January 9, 2014 ---
    http://www.insidehighered.com/news/2014/01/09/wiche-transfer-passport-based-proficiency-rather-credits  


    Competency-Based Programs (where instructors do not assign the grades) Can Work Well But Do Not Always Work Well

    A Research Report
    "Competency-Based Degree Programs in the U.S.: Postsecondary Credentials for Measurable Student Learning and Performance," Council on Adult and Experiential Learning, 2012 ---
    http://www.cael.org/pdfs/2012_CompetencyBasedPrograms

    Executive Summary
    As our economy evolves, there is growing recognition of the importance of an educated workforce. A key challenge is how to help more people, particularly adults, succeed at the postsecondary level and earn degrees. However, promoting degree completion is not our only challenge. Today our higher education system is facing a crisis regarding its perceived quality. One model for improving quality is competency-based education, in which an institution clearly defines the specific competencies expected of its graduates. This paper examines the current state of competency-based postsecondary education in the U.S., profiling the various types of competency-based, or competency-focused, models that currently exist, the extent to which these programs assess for student competencies or learning outcomes, and the extent to which these programs operate outside of a credit-based system. These programs can help inform other institutions interested in developing a stronger focus on competencies, whether by demonstrating the possibilities of high quality programs or by facilitating the recognition of learning.

    Jensen Comment
    The good news is that competency-based grades virtually put an end to the games students play to influence the grades assigned by their instructors. Instead, students may become more demanding that their instructors do a better job on content rather than be their buddies. Competency-based grading goes a long way toward leveling the playing field.

    However, a competency-based system can be dysfunctional to motivation and self-esteem. One of my old girlfriends at the University of Denver was called in by her physical chemistry professor, who made a deal with her: if she would change her major from chemistry, he would give her a C grade. I honestly think an F grade would've discouraged her to the point of dropping out of college. Instead she changed to DU's nursing school and flourished with a 3.3 GPA. Purportedly she became an outstanding nurse in a long and very satisfying career that didn't require much aptitude for physical chemistry. For some reason she was better in organic chemistry.

    I can't imagine teaching a case course in the Harvard Business School where the course grades are based entirely on a final examination and depend zero on what the course instructor feels was "class participation." There's not much incentive to participate in class discussions if those discussions have no impact on grades and instructor evaluations (such as evaluations for graduate school and employment).

    Much of what is learned in a course or an entire college curriculum cannot be measured by test grades and term-paper grades (where the readers of the term papers are not the instructors).

    In spite of all the worries about competency-based grading and student evaluations, there are circumstances where competency-based education inspires terrific learning experiences.


    Competency-Based Learning --- http://en.wikipedia.org/wiki/Competency-based_learning

    Northern Arizona University Offers Dual Transcripts, One of Which Is Competency-Based
    "Competency-Based Transcripts," by Paul Fain, Inside Higher Ed, August 9, 2013 ---
    http://www.insidehighered.com/news/2013/08/09/northern-arizona-universitys-new-competency-based-degrees-and-transcripts

    Jensen Comment
    This program differs from the competency-based programs at the University of Wisconsin, the University of Akron, and Southern New Hampshire University in that students must sign up for online courses at Northern Arizona before becoming eligible for the competency-based transcript. It differs from Western Governors University in that there are two transcripts rather than just a competency-based transcript for online courses.

    Bob Jensen's threads on competency-based assessment ---
    http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge


    "The Gates Effect: The Bill & Melinda Gates Foundation has spent $472-million (so far) on higher education. Why many in academe are not writing thank-you notes," by Marc Parry, Kelly Field, and Beckie Supiano, Chronicle of Higher Education, July 14, 2013 ---
    http://chronicle.com/article/The-Gates-Effect/140323/

    Jensen Comment
    This is a long article filled with more opinion than fact. One suspects that faculty unions had the major impact.

    Obviously, distance education with large or small classes and competency-based examinations is a poor choice for learning-challenged and unmotivated learners who need more hand-holding and inspiration to learn.

    On the other hand, the article assumes ipso facto that traditional colleges are doing a great job of educating. The fact of the matter is that what traditional colleges do best is inflate grades for lazy students ---
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation

    The other misleading thing about the article is its suggestion that competency-based testing leads to watered-down courses. The fact of the matter is that many traditional teachers would shake in their boots if their grade-inflated, pampered students had to take competency-based examinations --- which is why students tend to do quite poorly on the competency-based MCAT examinations for medical school after getting mostly A grades in their science courses. It is also why aspiring teachers do so poorly on teacher certification examinations that are hardly rocket science.

    This is mostly a paranoid article giving the status quo in higher education a pat on the back. If Bill Gates wants better reviews in the Chronicle, he should simply give the money to the AAUP.


    July 19, 2013 message from Glen Gray

    The following is the lead to an article that appeared in today’s L.A. Times:

    “San Jose State University is suspending a highly touted collaboration with online provider Udacity to offer low-cost, for-credit online courses after finding that more than half of the students failed to pass the classes, officials said Thursday.”

    Udacity Experiment at San Jose State Suspended After 56% to 76% of Students Fail Final Exams ---
    http://www.openculture.com/2013/07/udacity-experiment-at-san-jose-state-suspended.html

    Are competency-based MOOCs tougher for students than traditional courses?
    "Udacity Project on 'Pause'," by Ry Rivard. Chronicle of Higher Education,

    San Jose State's experiment with MOOC provider Udacity attracted enormous attention when it was launched, but students didn't do as well as they did in traditional classes.

     

    "A University's Offer of Credit for a MOOC Gets No Takers," by Steve Kolowich, Chronicle of Higher Education, July 8, 2013 ---
    http://chronicle.com/article/A-Universitys-Offer-of-Credit/140131/?cid=wc&utm_source=wc&utm_medium=en

    Jensen Comment
    With nationwide median grades being around A- in live classrooms, it may well be that students just fear that the same loose grading standards will not be applied to competency-based grading in a MOOC ---
    http://www.gradeinflation.com/

    Students cannot brown nose a MOOC for a higher grade ---
    http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge

    There may also be problems transferring these MOOC credits to other universities. Many universities do not allow transfer credit for distance education courses in general, although this is somewhat hard to enforce when major universities do not distinguish (on transcripts) which sections of courses were taken onsite versus online. In many instances students have a choice between onsite and online sections of the same course. But when all sections are available only via distance education, other universities may deny transfer credits. In accountancy, some state boards, such as the one in Texas, limit the number of distance education courses allowed for permission to take the CPA examination.

    Also it could be that this MOOC alternative just was not publicized enough to reach its potential market.

    Bob Jensen's threads on the controversial history of the OKI and the MOOCs ---
    http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI

     


    Question
    What is the difference between traditional competency-based course credits and "decoupled" competency-based course credits?

    Answer
    In traditional competency-based systems an instructor either does not assign course grades or assigns them based solely on examinations graded in a way that knowing a particular student cannot affect the final grade. Course grades are generally not influenced by class discussions (onsite or in online chat rooms), homework, term papers, course projects, team performance, etc. In many instances the instructors do not even prepare the examinations that determine competency-based grades.

    Western Governors University --- http://en.wikipedia.org/wiki/Western_Governors_University
    WGU was one of the first universities in modern times (since 1997) to offer fully accredited online courses using a competency-based grading system. However, students must participate in WGU and complete class assignments for courses before they can take the competency-based examinations.

    Southern New Hampshire University (a private onsite university that is not funded by the State of New Hampshire) ---
    http://en.wikipedia.org/wiki/Southern_New_Hampshire_University

    Capella University --- http://www.capella.edu/

    Kentucky Community and Technical College System --- http://www.kctcs.edu/

    "Credit Without Teaching," by Paul Fain, Inside Higher Ed, April 22, 2013 ---
    http://www.insidehighered.com/news/2013/04/22/competency-based-educations-newest-form-creates-promise-and-questions

    Earlier this year Capella University and the new College for America began enrolling hundreds of students in academic programs without courses, teaching professors, grades, deadlines or credit hour requirements, but with a path to genuine college credit.

    The two institutions are among a growing number that are giving competency-based education a try, including 25 or so nonprofit institutions. Notable examples include Western Governors University and the Kentucky Community and Technical College System.

    These programs are typically online, and allow students to progress at their own pace without formal course material. They can earn credit by successfully completing assessments that prove their mastery in predetermined competencies or tasks -- maybe writing in a business setting or using a spreadsheet to perform calculations.

    College for America and a small pilot program at Capella go a step further than the others, however, by severing any link to the credit hour standard. This approach is called “direct assessment.” Other competency-based programs track learning back to seat time under the credit hour, which assumes one hour of instruction and three hours of coursework per week. (For more details from College for America, click here.)

    Continued in article

    In "decoupled" course-credit systems, a university offers competency-based credit without requiring class attendance or online course participation. Students can learn the material from any sources, including free online learning modules, before signing up to take the competency-based examinations. Sometimes more than one "progress" competency-based examination may be required, but no particular course is required before taking any competency-based examination.

    Decoupled systems become a lot like the Uniform CPA Examination where there are multiple parts of the examination that may be passed in stages or passed in one computer-based sitting.

    Southern New Hampshire University (a private onsite university that is not funded by the State of New Hampshire) ---
    http://en.wikipedia.org/wiki/Southern_New_Hampshire_University

    SNHU claims to be the first university to decouple courses from competency-based examinations. However, I'm not certain that this claim is true, since the University of Wisconsin System may have been the first to offer some decoupled competency-based degree programs. The University of Akron now has some similar alternatives.

    Wisconsin System's Competency-Based Degrees as of November 28, 2012 ---
    http://www.wisconsin.edu/news/2012/r121128.htm 

    It is expected that students seeking decoupled competency-based credits will sign up for learning modules from various free learning systems.
    Listing of Sites for Free Courses and Learning Modules (unlike certificates, transferable credits are never free) ---
    http://www.opencolleges.edu.au/informed/features/free-online-courses-50-sites-to-get-educated-for-free/

     

    "Competency-Based Education Advances With U.S. Approval of Program," by Marc Parry, Chronicle of Higher Education, April 18, 2013 --- Click Here
    http://chronicle.com/blogs/wiredcampus/u-s-education-department-gives-a-boost-to-competency-based-education/43439?cid=wc&utm_source=wc&utm_medium=en

    Last month the U.S. Education Department sent a message to colleges: Financial aid may be awarded based on students’ mastery of “competencies” rather than their accumulation of credits. That has major ramifications for institutions hoping to create new education models that don’t revolve around the amount of time that students spend in class.

    Now one of those models has cleared a major hurdle. The Education Department has approved the eligibility of Southern New Hampshire University to receive federal financial aid for students enrolled in a new, self-paced online program called College for America, the private, nonprofit university has announced.

    Southern New Hampshire bills its College for America program as “the first degree program to completely decouple from the credit hour.” Unlike the typical experience in which students advance by completing semester-long, multicredit courses, students in College for America have no courses or traditional professors. These working-adult students make progress toward an associate degree by demonstrating mastery of 120 competencies. Competencies are phrased as “can do” statements, such as “can use logic, reasoning, and analysis to address a business problem” or “can analyze works of art in terms of their historical and cultural contexts.”

    Students show mastery of skills by completing tasks. In one task, for example, students are asked to study potential works of art for a museum exhibit about the changing portrayal of human bodies throughout history. To guide the students, Southern New Hampshire points them to a series of free online resources, such as “Smarthistory” videos presented by Khan Academy. Students must summarize what they’ve found by creating a PowerPoint presentation that could be delivered to a museum director.

    Completed tasks are shipped out for evaluation to a pool of part-time adjunct professors, who quickly assess the work and help students understand what they need to do to improve. Southern New Hampshire also assigns “coaches” to students to help them establish their goals and pace. In addition, the university asks students to pick someone they know as an “accountability partner” who checks in with them and nudges them along.

    Students gain access to the program through their employers. Several companies have set up partnerships with Southern New Hampshire to date, including Anthem Blue Cross Blue Shield and ConAgra Foods.

    The Education Department is grappling with how to promote innovation while preventing financial-aid abuses. Southern New Hampshire, whose $2,500-a-year program was established last year with support from the Bill & Melinda Gates Foundation, has served as a guinea pig in that process. But other institutions are lining up behind it, hoping to obtain financial aid for programs that don’t hinge on credit hours.

    Continued in article

    Jensen Comment
    In many ways this SNHU program reduces the costs of student admission and of offering remedial programs to get students up to speed to enroll in SNHU courses on campus.

    But there are enormous drawbacks
    In some courses the most important learning comes from student interactions, team projects, and most importantly case discussions. In the Harvard Business School, master case teachers often cannot predict the serendipitous way each class will proceed since the way it proceeds often depends upon comments made in class by students. In some courses the most important learning takes place in research projects. How do you have a competency-based speech course?

    Time and time again, CPA firms have learned that the best employees are not always medal winners on the CPA examination. For example, years ago a medal winner had on occasion taken only correspondence courses, and in some of those instances the medal winner did not perform well on the job, in part for lack of the interactive and team skills that in most instances are developed in onsite and online education.

    Note that distance education courses that are well done require student interactions and often team projects. It is not necessary to acquire such skills face-to-face. It is necessary, however, to require such interactions in a great distance education course.

    An SNHU College for America accounting graduate may not be allowed to sit for the CPA examination in some states, especially Texas, which requires that at least 15 credits be taken onsite face-to-face in traditional courses on campus. Actually, I cannot find where an accounting degree is even available from the College for America degree programs.


    "Green Light for Competency-Based Ed at Capella," Inside Higher Ed, May 23, 2013 ---
    http://www.insidehighered.com/quicktakes/2013/05/23/green-light-competency-based-ed-capella

    Jensen Comment
    I anticipate that a lot of for-profit universities will be following Capella's lead on this. In recent years, however, the lead has been taken by non-profit and public universities like Western Governors University, the University of Wisconsin, and the University of Akron. Other early non-profit competency-based programs include Southern New Hampshire University and the Chartered School of Accountancy masters program in Western Canada.


    Wisconsin System's Competency-Based Degrees as of November 28, 2012 ---
    http://www.wisconsin.edu/news/2012/r121128.htm


    "Study: Little Difference in Learning in Online and In-Class Science Courses," Inside Higher Ed, October 22, 2012 ---
    http://www.insidehighered.com/quicktakes/2012/10/22/study-little-difference-learning-online-and-class-science-courses

    A study in Colorado has found little difference in the learning of students in online or in-person introductory science courses. The study tracked community college students who took science courses online and in traditional classes, and who then went on to four-year universities in the state. Upon transferring, the students in the two groups performed equally well. Some science faculty members have expressed skepticism about the ability of online students in science, due to the lack of group laboratory opportunities, but the programs in Colorado work with companies to provide home kits so that online students can have a lab experience.
     

     

    Jensen Comment
    Firstly, note that online courses are not necessarily mass-education (MOOC) styled courses. The student-student and student-faculty interactions can be greater online than onsite. For example, my daughter's introductory chemistry class at the University of Texas had over 600 students. On the date of the final examination the instructor had never met her and had zero control over her final grade. On the other hand, her microbiology instructor in a graduate course at the University of Maine became her husband over 20 years ago.

    Another factor is networking. For example, Harvard Business School students meeting face-to-face in courses bond in life-long networks that may be stronger than those of students who've never established networks via classes, dining halls, volleyball games, softball games, rowing on the Charles River, etc. There's more to learning than is typically tested in competency examinations.

    My point is that there are many externalities to both onsite and online learning. And concluding that there's "little difference in learning" depends upon what you mean by learning. The SCALE experiments at the University of Illinois found that online students having the same instructor tended to do slightly better than onsite students. This is partly because there are fewer logistical time wasters in online learning. The effect becomes larger for off-campus students where commuting time (as in Mexico City) can take hours going to and from campus.
    http://faculty.trinity.edu/rjensen/255wp.htm

    Bob Jensen's threads on assessment are at
    http://faculty.trinity.edu/rjensen/Assess.htm


    "Innovations in Higher Education? Hah! College leaders need to move beyond talking about transformation before it's too late," by Ann Kirschner, Chronicle of Higher Education, April 8, 2012 ---
    http://chronicle.com/article/Innovations-in-Higher/131424/?sid=wc&utm_source=wc&utm_medium=en

    . . .

    (Conclusion)
    Some of the most interesting work begins in the academy but grows beyond it. "Scale" is not an academic value—but it should be. Most measures of prestige in higher education are based on exclusivity; the more prestigious the college, the larger the percentage of applicants it turns away. Consider the nonprofit Khan Academy, with its library of more than 3,000 education videos and materials, where I finally learned just a little about calculus. In the last 18 months, Khan had 41 million visits in the United States alone. It is using the vast data from that audience to improve its platform and grow still larger. TED, the nonprofit devoted to spreading ideas, just launched TED-Ed, which uses university faculty from around the world to create compelling videos on everything from "How Vast Is the Universe?" to "How Pandemics Spread." Call it Khan Academy for grown-ups. The Stanford University professor Sebastian Thrun's free course in artificial intelligence drew 160,000 students in more than 190 countries. No surprise, the venture capitalists have come a-calling, and they are backing educational startups like Udemy and Udacity.

    All of those are signposts to a future where competency-based credentials may someday compete with a degree.

    At this point, if you are affiliated with an Ivy League institution, you'll be tempted to guffaw, harrumph, and otherwise dismiss the idea that anyone would ever abandon your institution for such ridiculous new pathways to learning. You're probably right. Most institutions are not so lucky. How long will it take for change to affect higher education in major ways? Just my crystal ball, but I would expect that institutions without significant endowments will be forced to change by 2020. By 2025, the places left untouched will be few and far between.

    Here's the saddest fact of all: It is those leading private institutions that should be using their endowments and moral authority to invest in new solutions and to proselytize for experimentation and change, motivated not by survival but by the privilege of securing the future of American higher education.

    The stakes are high. "So let me put colleges and universities on notice," President Obama said in his recent State of the Union address. "If you can't stop tuition from going up, the funding you get from taxpayers will go down." Because of the academy's inability to police itself and improve graduation rates, and because student debt is an expedient political issue, the Obama administration recently threatened to tie colleges' eligibility for campus-based aid programs to institutions' success in improving affordability and value for students.

    Whether the president's threat is fair or not, it will not transform higher education. Change only happens on the ground. Despite all the reasons to be gloomy, however, there is room for optimism. The American university, the place where new ideas are born and lives are transformed, will eventually focus that lens of innovation upon itself. It's just a matter of time.

     

    Jensen Comment
    This is a long and important article for all educators to read carefully. Onsite colleges have always served many purposes, but one purpose they never served is to be knowledge fueling stations where students go to fill their tanks. At best, colleges put a shot glass of fuel in tanks with unknown capacities.

    Students go to an onsite college for many reasons other than to put fuel in their knowledge tanks. They go to live and work in relatively safe transitional environments between home and the mean streets. They go to mature, socialize, mate, drink, laugh, leap over hurdles societies place in front of career paths, etc. The problem in the United States is that onsite college living and education have become relatively expensive luxuries. Students must now make more painful decisions about how much to impoverish their parents and how deeply to go into debt.

    I have a granddaughter 22 years old majoring in pharmacy (six year program). She will pay off her student loans before she's 50 years old if she's lucky. Some older students who've not been able to pay off their loans are becoming worried that the Social Security Administration will garnish their retirement Social Security monthly payments for unpaid student loans.

    We've always known that colleges are not necessary places for learning and scholarship. Until 43 years ago (when the Internet was born) private and public libraries were pretty darn necessary for scholarship. Now the Internet provides access to most of the known knowledge of the world. But becoming a scholar on the Internet is relatively inefficient and overwhelming without the aid of distillers of knowledge, which is where onsite and online college courses can greatly add to the efficiency of learning.

    But college courses can be terribly disappointing as distillers of knowledge. For one thing, grade inflation has disgracefully watered down the amount of real fuel in that shot glass of knowledge provided in a college course ---
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
    Grades rather than learning became the tickets to careers and graduate schools, thereby leading to street-smart cheating taking over for real learning perspiration ---
    http://faculty.trinity.edu/rjensen/Plagiarism.htm

    When 80% of Harvard's graduating class graduates cum laude, we can no longer identify which graduates were the best scholars in their class.

    Soon those graduates from Harvard, Florida A&M University, Capella University, and those who learned on their own from free courses, video lectures, and course materials on the Web will all face some sort of common examinations (written and oral) of their competencies in specialties. Competency testing will be the great leveler, much like licensure examinations such as the Bar Exam, the CPA exam, and the CFA exam, which are graded on the basis of what you know rather than where you learned what you know. It won't really matter whether you paid a fortune to learn Bessel Functions onsite at MIT or learned them for free from the MITx online certificate program ---
    http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI

    If you are an educator or are becoming an educator, please read:
    "Innovations in Higher Education? Hah! College leaders need to move beyond talking about transformation before it's too late," by Ann Kirschner, Chronicle of Higher Education, April 8, 2012 ---
    http://chronicle.com/article/Innovations-in-Higher/131424/?sid=wc&utm_source=wc&utm_medium=en 


    At the University of Wisconsin
    "Online Degree Program Lets Students Test Out of What They Already Know," by Angela Chen, June 20, 2012 --- Click Here
    http://chronicle.com/blogs/wiredcampus/online-degree-program-lets-students-test-out-of-what-they-already-know/37097?cid=wc&utm_source=wc&utm_medium=en

    The University of Wisconsin plans to start a “flexible degree” program online focused on allowing undergraduates to test out of material they have mastered.

    The new program, geared toward working adults with some college education, operates under a “competency based” model, said Raymond Cross, chancellor of the University of Wisconsin Colleges and University of Wisconsin-Extension. This model is similar to the Advanced Placement program, in which high-school students take AP tests to pass out of college-level courses.

    In the university’s new program, college courses will be broken down into units. For example, a higher-level mathematics class could include units such as linear algebra and trigonometry. Students can then test out of certain units (instead of full courses) and spend time learning only material that is new to them. Eventually, the units will build into courses, and then a degree. The flexible-degree program and traditional-degree program will have identical course requirements, and since each flexible degree will be associated with a specific campus, the student will receive a diploma from the originating campus and not from the system.

    “We’re trying to find ways to reduce the cost of education,” Mr. Cross said. “Implicit in the model is the idea that you can take lectures online from free sources—like Khan Academy and MITx—and prepare yourself for the competency test. Then take the remaining courses online at UW.”

    The biggest challenge, he says, is determining how to best test competency. Some units will require tests, while others may require written papers or laboratory work. The difficulty of measuring “competency” for any unit will affect the program’s pricing structure, which has not yet been determined.

    The idea of competency-based credentials is common in technical and health fields, Mr. Cross said, but it is rare at traditional universities. The program is part of a push to encourage Wisconsin’s 700,000 college dropouts to go back to a university.

    “With higher ed now, people often have a piece or two missing in their education, so we are responding to the changes in our culture and helping them pull all these pieces together,” Mr. Cross said. “Students already interface with a lot of different institutions and different classes and professors, and this will help that process. I don’t think this diminishes traditional higher ed at all. I think it’ll enhance it.”

    The first courses in the flexible-degree program will be available starting in fall 2013. The university is still developing exact degree specifications, Mr. Cross said. Likely degrees include business management and information technology.

    Bob Jensen's threads on distance education training and education alternatives ---
    http://faculty.trinity.edu/rjensen/Crossborder.htm


    "Score One for the Robo-Tutors," by Steve Kolowich, Inside Higher Ed, May 22, 2012 ---
    http://www.insidehighered.com/news/2012/05/22/report-robots-stack-human-professors-teaching-intro-stats

    Without diminishing learning outcomes, automated teaching software can reduce the amount of time professors spend with students and could substantially reduce the cost of instruction, according to new research.

    In experiments at six public universities, students assigned randomly to statistics courses that relied heavily on “machine-guided learning” software -- with reduced face time with instructors -- did just as well, in less time, as their counterparts in traditional, instructor-centric versions of the courses. This largely held true regardless of the race, gender, age, enrollment status and family background of the students.

    The study comes at a time when “smart” teaching software is being increasingly included in conversations about redrawing the economics of higher education. Recent investments by high-profile universities in “massively open online courses,” or MOOCs, have elevated the notion that technology has reached a tipping point: with the right design, an online education platform, under the direction of a single professor, might be capable of delivering meaningful education to hundreds of thousands of students at once.

    The new research from the nonprofit organization Ithaka was seeking to prove the viability of a less expansive application of “machine-guided learning” than the new MOOCs are attempting -- though one that nevertheless could have real implications for the costs of higher education.

    The study, called “Interactive Learning Online at Public Universities,” involved students taking introductory statistics courses at six (unnamed) public universities. A total of 605 students were randomly assigned to take the course in a “hybrid” format: they met in person with their instructors for one hour a week; otherwise, they worked through lessons and exercises using an artificially intelligent learning platform developed by learning scientists at Carnegie Mellon University’s Open Learning Initiative.

    Researchers compared these students against their peers in the traditional-format courses, for which students met with a live instructor for three hours per week, using several measuring sticks: whether they passed the course, their performance on a standardized test (the Comprehensive Assessment of Statistics), and the final exam for the course, which was the same for both sections of the course at each of the universities.

    The results will provoke science-fiction doomsayers, and perhaps some higher-ed traditionalists. “Our results indicate that hybrid-format students took about one-quarter less time to achieve essentially the same learning outcomes as traditional-format students,” report the Ithaka researchers.

    The robotic software did have disadvantages, the researchers found. For one, students found it duller than listening to a live instructor. Some felt as though they had learned less, even if they scored just as well on tests. Engaging students, such as professors might by sprinkling their lectures with personal anecdotes and entertaining asides, remains one area where humans have the upper hand.

    But on straight teaching, the machines were judged to be as effective as, and more efficient than, their personality-having counterparts.

    It is not the first time the software used in the experiment, developed over the last five years or so by Carnegie Mellon’s Open Learning Initiative, has been proven capable of teaching students statistics in less time than a traditional course while maintaining learning outcomes. So far that research has failed to persuade many traditional institutions to deploy the software -- ostensibly for fear of shortchanging students and alienating faculty with what is liable to be seen as an attempt to use technology as a smokescreen for draconian personnel cuts.

    But the authors of the new report, led by William G. Bowen, the former president of Princeton University, hope their study -- which is the largest and perhaps the most rigorous to date on the effectiveness of machine-guided learning -- will change minds.

    “As several leaders of higher education made clear to us in preliminary conversations, absent real evidence about learning outcomes there is no possibility of persuading most traditional colleges and universities, and especially those regarded as thought leaders, to push hard for the introduction of [machine-guided] instruction” on their campuses.

    Continued in article

    "‘Free-Range Learners’: Study Opens Window Into How Students Hunt for Educational Content Online," by Marc Parry, Chronicle of Higher Education, April 25, 2012 --- Click Here
    http://chronicle.com/blogs/wiredcampus/free-range-learners-study-opens-window-into-how-students-hunt-for-educational-content-online/36137?sid=wc&utm_source=wc&utm_medium=en

    Bob Jensen's threads on Tools and Tricks of the Trade are at
    http://faculty.trinity.edu/rjensen/000aaa/thetools.htm

    Bob Jensen's threads on the explosion of distance education and training ---
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#DistanceEducation


    Outcomes Assessment

    March 10, 2012 message from Penny Hanes

    Can anyone point me to some good information on course specific outcomes assessment in an accounting program?

    Penny Hanes,
    Associate Professor
    Mercyhurst University

    March 11. 2012 reply from Bob Jensen

    Hi Penny,

    Respondus has some testing software:

    October 13, 2009 message from Richard Campbell [campbell@RIO.EDU]

    For anyone teaching online, this software is a "must-have". They have released a new (4.0) version with improved integration of multimedia. Below are some videos (created in Camtasia) that demonstrate key features of the software.

    http://www.respondus.com/

    They have tightened up the integration with publisher test banks.
    Richard J. Campbell

    mailto:campbell@rio.edu

    Bob Jensen's threads for online assessment are at
    http://faculty.trinity.edu/rjensen/Assess.htm#Examinations

    There are different levels that you can approach such a topic. Many are based on the mastery learning theory of Benjamin Bloom ---
    http://en.wikipedia.org/wiki/Benjamin_Bloom


    The best-known accounting course assessment experiment using Bloom's Taxonomy, for a set of courses spanning an entire program, was funded by an Accounting Education Change Commission (AECC) grant to a very fine accounting program at Kansas State University. The results of this and the other AECC experiences are available from the AAA (ISBN 0-86539-085-1) ---
    http://aaahq.org/AECC/changegrant/cover.htm
    The KSU outcomes are reported in Chapter 3 ---
    http://aaahq.org/AECC/changegrant/chap3.htm
    I think Lynn Thomas at KSU was one of the principal investigators.

    Michael Krause, Le Moyne College, has conducted some AAA programs on Bloom's Taxonomy assessment.
    Susan A. Lynn, University of Baltimore, has done some of this assessment for intermediate accounting.
    Susan Wolcott, Canada's Chartered Accountancy School of Business, has delved into critical thinking assessment in accounting courses

    Bob Jensen's threads on assessment are at
    http://faculty.trinity.edu/rjensen/Assess.htm

     

     

     


    "A Measure of Learning Is Put to the Test: Results of national exam will go public in 2012," by David Glenn, Chronicle of Higher Education, September 19, 2010 --- http://chronicle.com/article/A-Measure-of-Learning-Is-Put/124519/

    You have 90 minutes to complete this test.

    Here is your scenario: You are the assistant to a provost who wants to measure the quality of your university's general-education program. Your boss is considering adopting the Collegiate Learning Assessment, or CLA, a national test that asks students to demonstrate their ability to synthesize evidence and write persuasively.

    The CLA is used at more than 400 colleges. Since its debut a decade ago, it has been widely praised as a sophisticated alternative to multiple-choice tests. At some colleges, its use has helped spark sweeping changes in instruction and curriculum. And soon, many more of the scores will be made public.

    But skeptics say the test is too detached from the substantive knowledge that students are actually expected to acquire. Others say those who take the test have little motivation to do well, which makes it tough to draw conclusions from their performance.

    You may review the following documents:

    Graphs of Collegiate Learning Assessment scores on the University of Texas system's campuses over a four-year period.
    An essay in which an assistant provost at a flagship campus describes her "grave concerns" about using CLA scores to compare different colleges.
    A report in which the CLA's creators reply to their critics.

    Your task: Write a two-page memorandum to your boss that describes and analyzes the major arguments for and against adopting the CLA. When you have finished, please hand your materials to the proctor and leave the room quietly.

    It is easy to see why the test format that you just tasted has been so appealing to many people in higher education. The CLA is a direct measure of skills, in contrast to surveys about how much time students spend studying or how much they believe they have learned. And unlike multiple-choice-based measures of learning, the CLA aspires to capture a student's ability to make an argument and to interpret multiple types of evidence. Those skills are close to the heart of a liberal-arts education.

    "Everything that No Child Left Behind signified during the Bush administration—we operate 180 degrees away from that," says Roger Benjamin, president of the Council for Aid to Education, which developed and promotes the CLA. "We don't want this to be a high-stakes test. We're putting a stake in the ground on classic liberal-arts issues. I'm willing to rest my oar there. These core abilities, these higher-order skills, are very important, and they're even more important in a knowledge economy where everyone needs to deal with a surplus of information." Only an essay test, like the CLA, he says, can really get at those skills.

    Richard J. Shavelson, an educational psychologist at Stanford University and one of the CLA's creators, makes a similar point in his recent book, Measuring College Learning Responsibly: Accountability in a New Era (Stanford University Press). "If you want to find out not only whether a person knows the laws governing driving but also whether she can actually drive a car," he writes, "don't judge her performance solely with a multiple-choice test. Rather, also administer a behind-the-wheel driving test."

    "The CLA is really an authentic assessment process," says Pedro Reyes, associate vice chancellor for academic planning and assessment at the University of Texas system. "The Board of Regents here saw that it would be an important test because it measures analytical ability, problem-solving ability, critical thinking, and communication. Those are the skills that you want every undergraduate to walk away with." (Other large systems that have embraced the CLA include California State University and the West Virginia system.)

    One feature that appealed to Mr. Reyes and his colleagues is that the CLA typically reports scores on a "value added" basis, controlling for the scores that students earned on the SAT or ACT while in high school. In raw terms, the highest scores in the Texas system are at Austin and Dallas, the most-selective campuses. But in value-added terms, it appears that students at San Antonio and El Paso make stronger gains between their freshman and senior years.

    The CLA's overseers, however, say they do not want colleges to become overly concerned with bean-counting and comparing public scores. Instead, they emphasize the ways in which colleges can use their own CLA scores to experiment with improved models of instruction. Since 2007, Mr. Benjamin's organization has invested heavily in "performance-task academies," which encourage colleges to add CLA-style assignments to their liberal-arts courses.

    One campus that has gone down that road is the University of Evansville, where first-year-experience courses have begun to ask students to do performance tasks.

    "We began by administering a retired CLA question, a task that had to do with analyzing crime-reduction strategies," says Brian R. Ernsting, an associate professor of biology at Evansville. "We talked with the students about the modes of thinking that were involved there, how to distinguish correlation from causation and anecdotes from data."

    Similar things are happening at Pacific Lutheran University. "Our psychology department is working on a performance task that mirrors the CLA, but that also incorporates disciplinary content in psychology," says Karen E. McConnell, director of assessment. "They're planning to make that part of their senior capstone course."

    How to Interpret the Scores?

    Mr. Ernsting and Ms. McConnell are perfectly sincere about using CLA-style tasks to improve instruction on their campuses. But at the same time, colleges have a less high-minded motive for familiarizing students with the CLA style: It just might improve their scores when it comes time to take the actual test.

    And that matters, in turn, because by 2012, the CLA scores of more than 100 colleges will be posted, for all the world to see, on the "College Portrait" Web site of the Voluntary System of Accountability, an effort by more than 300 public colleges and universities to provide information about life and learning on their campuses. (Not all of the colleges have adopted the CLA. Some use the Educational Testing Service's "Proficiency Profile," and others use the ACT's Collegiate Assessment of Academic Proficiency.)

    A few dozen colleges in the voluntary project, including those in the Texas system, have already made their test scores public. But for most, the 2012 unveiling will be a first.

    "If a college pays attention to learning and helps students develop their skills—whether they do that by participating in our programs or by doing things on their own—they probably should do better on the CLA," says Marc Chun, a research scientist at the Council for Aid to Education. Such improvements, he says, are the main point of the project.

    But that still raises a question: If familiarizing students with CLA-style tasks does raise their scores, then the CLA might not be a pure, unmediated reflection of the full range of liberal-arts skills. How exactly should the public interpret the scores of colleges that do not use such training exercises?

    Trudy W. Banta, a professor of higher education and senior adviser to the chancellor for academic planning and evaluation at Indiana University-Purdue University at Indianapolis, believes it is a serious mistake to publicly release and compare scores on the test. There is too much risk, she says, that policy makers and the public will misinterpret the numbers.

    "Standardized tests of generic skills—I'm not talking about testing in the major—are so much a measure of what students bring to college with them that there is very little variance left out of which we might tease the effects of college," says Ms. Banta, who is a longtime critic of the CLA. "There's just not enough variance there to make comparative judgments about the comparative quality of institutions."

    Compounding that problem, she says, is the fact that most colleges do not use a true longitudinal model: That is, the students who take the CLA in their first year do not take it again in their senior year. The test's value-added model is therefore based on a potentially apples-and-oranges comparison.

    The test's creators reply that they have solved that problem by doing separate controls for the baseline skills of freshman test-takers and senior test-takers. That is, the freshman test-takers' scores are assessed relative to their SAT and ACT scores, and so are senior test-takers' scores. For that reason, colleges cannot game the test by recruiting an academically weak pool of freshmen and a strong pool of seniors.
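    The value-added adjustment described above can be sketched numerically. The following is a deliberately simplified illustration, not the CLA's actual methodology: it fits a least-squares line predicting test scores from entry SAT scores using the freshman cohort, then measures how far the senior cohort's scores sit above that baseline prediction. All the data, the function names, and the single-predictor model are invented for illustration.

    ```python
    def ols_fit(x, y):
        """Intercept and slope of a simple least-squares fit of y on x."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
                 / sum((a - mx) ** 2 for a in x))
        return my - slope * mx, slope

    def value_added(freshman_sat, freshman_cla, senior_sat, senior_cla):
        """Mean amount by which seniors outscore the baseline predicted
        from the freshman cohort's SAT-to-score relationship."""
        intercept, slope = ols_fit(freshman_sat, freshman_cla)
        predicted = [intercept + slope * s for s in senior_sat]
        residuals = [a - p for a, p in zip(senior_cla, predicted)]
        return sum(residuals) / len(residuals)

    # Invented cohorts with similar SAT profiles, as in the article's
    # Bakersfield example; here seniors score higher than their entry
    # SAT scores alone would predict.
    f_sat = [1000, 1100, 1200, 1300]
    f_cla = [1010, 1090, 1210, 1290]
    s_sat = [1000, 1100, 1200, 1300]
    s_cla = [1120, 1190, 1330, 1400]

    print(round(value_added(f_sat, f_cla, s_sat, s_cla), 1))  # prints 110.0
    ```

    Note the design point raised in the surrounding paragraphs: because the freshmen and seniors are different students (not a true longitudinal sample), the whole comparison rests on the baseline adjustment doing its job, which is exactly what critics like Ms. Banta question.
    
    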

    Another concern is that students do not always have much motivation to take the test seriously. That problem is especially challenging with seniors, who are typically recruited to take the CLA toward the end of their final semester, when they can already taste the graduation champagne. Who at that stage of college wants to carefully write a 90-minute essay that isn't required for any course?

    For that reason, many colleges have had to come up with elaborate incentives to get students to take the test at all. (See the graphic below.) A recent study at Central Connecticut State University found that students' scores were highly correlated with how long they had spent writing their essays.

    Take My Test — Please

    The Collegiate Learning Assessment has been widely praised. But it involves an arduous 90 minutes of essay writing. As a result, many colleges have resorted to incentives and requirements to get students to take the test, and to take it seriously.

    As of last week, there were some significant bugs in the presentation of CLA scores on the College Portrait Web site. Of the few dozen universities that had already chosen to publish CLA data on that site, roughly a quarter of the reports appeared to include erroneous descriptions of the year-to-year value-added scores. In some cases, the errors made the universities' gains appear better than they actually were. In other cases, they made them seem worse.

    Seniors at California State University at Bakersfield, for example, had CLA scores that were 155 points higher than freshmen's, while the two cohorts' SAT scores were similar. The College Portrait site said that the university's score gains were "below what would be expected." The University of Missouri at St. Louis, meanwhile, had senior scores that were only 64 points higher than those of freshmen, and those two cohorts had identical ACT scores. But those score gains were reported as "well above what would be expected."

    "It doesn't make sense, what's presented here," said Stephen Klein, the CLA's director of research and development, when The Chronicle pointed out such discrepancies. "This doesn't look like something we would produce." Another official at the Council for Aid to Education confirmed that at least three of the College Portrait reports were incorrect, and said there appeared to be systematic problems with the site's presentation of the data.

    As The Chronicle went to press, the Voluntary System of Accountability's executive director, Christine M. Keller, said her office would identify and fix any errors. The forms that institutions fill out for the College Portrait, she said, might be confusing for administrators because they do not always mirror the way the CLA itself (and the Collegiate Assessment of Academic Proficiency and ETS's Proficiency Profile) present their official data. In any case, Ms. Keller said, a revised version of the College Portrait site is scheduled to go online in December.

    It is clear that CLA scores do reflect some broad properties of a college education. In a study for their forthcoming book, Academically Adrift: Limited Learning on College Campuses (University of Chicago Press), the sociologists Richard Arum and Josipa Roksa asked students at 24 colleges to take the CLA during their first semester and then again during their fourth. Their study was conducted before any significant number of colleges began to consciously use CLA-style exercises in the classroom.

    The two authors found one clear pattern: Students' CLA scores improved if they took courses that required a substantial amount of reading and writing. Many students didn't take such courses, and their CLA scores tended to stay flat.

    The pattern was consistent across the ability spectrum: Regardless of whether a student's CLA scores were generally low or high, their scores were more likely to improve if they had taken demanding college courses.

    So there is at least one positive message in Mr. Arum and Ms. Roksa's generally gloomy book. Colleges that make demands on students can actually develop their skills on the kinds of things measured by the CLA.

    "We found that students in traditional liberal-arts fields performed and improved more over time on the CLA," says Mr. Arum, a professor at New York University. "In other fields, in education, business, and social work, they didn't do so well. Some of that gap we can trace back to time spent studying. That doesn't mean that students in education and business aren't acquiring some very valuable skills. But at the same time, the communication and reasoning skills measured by the CLA really are important to everyone."

    Dueling Purposes

    For more than a century, scholars have had grand visions of building national tests for measuring college-level learning. Mr. Shavelson, of Stanford, sketches several of those efforts in his book, including a 1930s experiment that tested thousands of students at colleges throughout Pennsylvania. (Sample question: "Of Corneille's plays, 1. Polyeucte, 2. Horace, 3. Cinna, 4. Le Cid shows least the influence of classical restraint.")

    Mr. Shavelson believes the CLA's essays and "performance tasks" offer an unusually sophisticated way of measuring what colleges do, without relying too heavily on factual knowledge from any one academic field. But in his book he also notes the tension between the two basic uses of nationally normed tests: Sometimes they're used for internal improvements, and sometimes they're used as benchmarks for external comparisons. Those two uses don't always sit easily together. Politicians and consumers want easily interpretable scores, while colleges need subtler and more detailed data to make internal improvements.

    Can the CLA fill both of those roles? That is the experiment that will play out as more colleges unveil their scores.

    Teaching to the Test, Somewhat
    "An Assessment Test Inspires Tools for Teaching," by David Glenn. Chronicle of Higher Education, September 19, 2010 ---
    http://chronicle.com/article/An-Assessment-Test-Inspires/124537/


    "Oregon Trains Educators to Improve Learning for All Students," by Tanya Roscorla, Converge Magazine, January 6, 2012 ---
    http://www.convergemag.com/curriculum/Oregon-DATA-Year5.html?elq=1e13f85f2dc34e84b8b1397c797c2f58

    For years, Oregon school districts have collected student test data. In field assessments, the Oregon Education Department found that 125 different assessments existed in the state to track student progress.

    But the data sat in warehouses, unused or misused. Teachers and administrators didn't know how to easily find, analyze and use student assessment results to inform instruction, said Mickey Garrison, data literacy director for the Oregon Department of Education.

    Five years ago, the department started the Oregon Direct Access to Achievement Project with a $4.7 million federal grant to improve student learning. This week, the project is publishing its Year 5 report.

    Through the project, Oregon now has an adaptable data framework and a network for districts that connects virtual teams of administrators and teachers around the state. The framework has also helped the state mesh the Common Core State Standards with its own.

    "Moving ideas from paper into practice is not something that I'm gonna say we in education have necessarily done a good job of in the past, but the model that we created for data definitely goes deep into implementation, and that's essential," Garrison said.

    Continued in article


    The problem is that our students choose very bland, low nourishment diets in our modern day smorgasbord curricula. Their concern is with their grade averages rather than their education. And why not? Grades for students and turf for faculty have become the keys to the kingdom!
    Bob Jensen

    "Are Undergraduates Actually Learning Anything?" by Richard Arum and Josipa Roksa. Chronicle of Higher Education, January 18, 2011 ---
    http://chronicle.com/article/Are-Undergraduates-Actually/125979/

    Drawing on survey responses, transcript data, and results from the Collegiate Learning Assessment (a standardized test taken by students in their first semester and at the end of their second year), Richard Arum and Josipa Roksa concluded that a significant percentage of undergraduates are failing to develop the broad-based skills and knowledge they should be expected to master. Here is an excerpt from Academically Adrift: Limited Learning on College Campuses (University of Chicago Press), their new book based on those findings.

    Continued in article

    Our Compassless Colleges: What are students really not learning?
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#Berkowitz


    What questions might classroom teachers ask of their students,
    the answers to which would allow a strong inference that the students "understood"?

    "The Assessment of 'Understanding'," by Lloyd Bond, Carnegie Foundation for Advancement in Teaching --- Click Here

    Study to remember and you will forget.
    Study to understand and you will remember.
    —Anonymous

    I once sat on the dissertation committee of a graduate student in mathematics education who had examined whether advanced graduate students in math and science education could explain the logic underlying a popular procedure for extracting square roots by hand. Few could explain why the procedure worked. Intrigued by the results, she decided to investigate whether they could explain the logic underlying long division. To her surprise, most in her sample could not. All of the students were adept at division, but few understood why the procedure worked.

    In a series of studies at Johns Hopkins University, researchers found that first year physics students could unerringly solve fairly sophisticated problems in classical physics involving moving bodies, but many did not understand the implications of their answers for the behavior of objects in the real world. For example, many could not draw the proper trajectories of objects cut from a swinging pendulum that their equations implied.

    What then does it mean to “understand” something—a concept, a scientific principle, an extended rhetorical argument, a procedure or algorithm? What questions might classroom teachers ask of their students, the answers to which would allow a strong inference that the students “understood”? Every educator from kindergarten through graduate and professional school must grapple almost daily with this fundamental question. Do my students really “get it”? Do they genuinely understand the principle I was trying to get across at a level deeper than mere regurgitation? Rather than confront the problem head on, some teachers, perhaps in frustration, sidestep it. Rather than assign projects or construct examinations that probe students’ deep understanding, they require only that students apply the learned procedures to problems highly similar to those discussed in class. Other teachers with the inclination, time and wherewithal often resort to essay tests that invite their students to probe more deeply, but as often as not their students decline the invitation and stay on the surface.

    I have thought about issues surrounding the measurement of understanding on and off for years, but have not systematically followed the literature on the topic. On a lark, I conducted three separate Google searches and obtained the following results:

    Even with the addition of “classroom” to the search, the number of hits exceeded 9,000 for each search. The listings covered the spectrum—from suggestions to elementary school teachers on how to detect “bugs” in children’s understanding of addition and subtraction, to discussions of laboratory studies of brain activity during problem solving, to abstruse philosophical discussions in hermeneutics and epistemology. Clearly, this approach was taking me everywhere, which is to say, nowhere.

    Fully aware that I am ignoring much that has been learned, I decided instead to draw upon personal experience—some 30 years in the classroom—to come up with a list of criteria that classroom teachers might use to assess understanding. The list is undoubtedly incomplete, but it is my hope that it will encourage teachers to not only think more carefully about how understanding might be assessed, but also—and perhaps more importantly—encourage them to think more creatively about the kinds of activities they assign their classes. These activities should stimulate students to study for understanding, rather than for mere regurgitation at test time.

    The student who understands a principle, rule, procedure or concept should be able to do the following tasks (these are presented in no particular order and their actual difficulties are an empirical question):

    Construct problems that illustrate the concept, principle, rule or procedure in question.
    As the two anecdotes above illustrate, students may know how to use a procedure or solve specific textbook problems in a domain, but may still not fully understand the principle involved. A more stringent test of understanding would be that they can construct problems themselves that illustrate the principle. In addition to revealing much to instructors about the nature of students’ understanding, problem construction by students can be a powerful learning experience in its own right, for it requires the student to think carefully about such things as problem constraints and data sufficiency.

    Identify and, if possible, correct a flawed application of a principle or procedure.
    This is basically a check on conceptual and procedural knowledge. If a student truly understands a concept, principle or procedure, she should be able to recognize when it is faithfully and properly applied and when it is not. In the latter case, she should be able to explain and correct the misapplication.

    Distinguish between instances and non-instances of a principle; or stated somewhat differently, recognize and explain “problem isomorphs,” that is, problems that differ in their context or surface features, but are illustrations of the same underlying principle.
    In a famous and highly cited study by Michelene Chi and her colleagues at the Learning Research and Development Center, novice physics students and professors of physics were each presented with problems typically found in college physics texts and asked to sort or categorize them into groups that “go together” in some sense. They were then asked to explain the basis for their categorization. The basic finding (since replicated in many different disciplines) was that the novice physics students tended to sort problems on the basis of their surface features (e.g., pulley problems, work problems), whereas the experts tended to sort problems on the basis of their “deep structure,” the underlying physical laws that they illustrated (e.g., Newton’s third law of motion, the second law of thermodynamics). This profoundly revealing finding is usually discussed in the context of expert-novice comparisons and in studies of how proficiency develops, but it is also a powerful illustration of deep understanding.

    Explain a principle or concept to a naïve audience.
    One of the most difficult questions on an examination I took in graduate school was the following: “How would you explain factor analysis to your mother?” That I remember this question over 30 years later is strong testimony to the effect it had on me. I struggled mightily with it. But the question forced me to think about the underlying meaning of factor analysis in ways that had not occurred to me before.

    Mathematics educator and researcher, Liping Ma, in her classic exposition Knowing and Teaching Elementary Mathematics (Lawrence Erlbaum, 1999), describes the difficulty some fifth and sixth grade teachers in the United States encounter in explaining fundamental mathematical concepts to their charges. Many of the teachers in her sample, for example, confused division by 1/2 with division by two. The teachers could see on a verbal level that the two were different but they could neither explain the difference nor the numerical implications of that difference. It follows that they could not devise simple story problems and other exercises for fifth and sixth graders that would demonstrate the difference.

    To be sure, students may well understand a principle, procedure or concept without being able to do all of the above. But a student who can do none of the above almost certainly does not understand, and students who can perform all of the above tasks flawlessly almost certainly do understand.

    Continued in article

    Jensen Comment
    This is a huge problem in accounting education, because so many of us teach "how to" procedures, often very complex procedures, without really knowing whether our students truly understand the implications of what they are doing for decision makers who use accounting information, for fraud detection, for fraud prevention, etc. For example, when teaching rules for asset capitalization versus expensing, it might help students better understand if they simultaneously learned about how and why Worldcom understated earnings by over a billion dollars by capitalizing expenditures that should have been expensed --- http://faculty.trinity.edu/rjensen/FraudEnron.htm#WorldCom

    Also see http://faculty.trinity.edu/rjensen/265wp.htm


    Education is an admirable thing, but it is well to remember from time to time that nothing that is worth learning can be taught.
    Oscar Wilde

    "The Objective of Education is Learning, Not Teaching (audio version available)," University of Pennsylvania's Knowledge@Wharton, August 20, 2008 --- http://knowledge.wharton.upenn.edu/article.cfm;jsessionid=9a30b5674a8d333e4d18?articleid=2032

    In their book, Turning Learning Right Side Up: Putting Education Back on Track, authors Russell L. Ackoff and Daniel Greenberg point out that today's education system is seriously flawed -- it focuses on teaching rather than learning. "Why should children -- or adults -- be asked to do something computers and related equipment can do much better than they can?" the authors ask in the following excerpt from the book. "Why doesn't education focus on what humans can do better than the machines and instruments they create?"

    "Education is an admirable thing, but it is well to remember from time to time that nothing that is worth learning can be taught."
       -- Oscar Wilde

    Traditional education focuses on teaching, not learning. It incorrectly assumes that for every ounce of teaching there is an ounce of learning by those who are taught. However, most of what we learn before, during, and after attending schools is learned without its being taught to us. A child learns such fundamental things as how to walk, talk, eat, dress, and so on without being taught these things. Adults learn most of what they use at work or at leisure while at work or leisure. Most of what is taught in classroom settings is forgotten, and much of what is remembered is irrelevant.

    In most schools, memorization is mistaken for learning. Most of what is remembered is remembered only for a short time, but then is quickly forgotten. (How many remember how to take a square root or ever have a need to?) Furthermore, even young children are aware of the fact that most of what is expected of them in school can better be done by computers, recording machines, cameras, and so on. They are treated as poor surrogates for such machines and instruments. Why should children -- or adults, for that matter -- be asked to do something computers and related equipment can do much better than they can? Why doesn't education focus on what humans can do better than the machines and instruments they create?

    When those who have taught others are asked who in the classes learned most, virtually all of them say, "The teacher." It is apparent to those who have taught that teaching is a better way to learn than being taught. Teaching enables the teacher to discover what one thinks about the subject being taught. Schools are upside down: Students should be teaching and faculty learning.

    After lecturing to undergraduates at a major university, I was accosted by a student who had attended the lecture. After some complimentary remarks, he asked, "How long ago did you teach your first class?"

    I responded, "In September of 1941."

    "Wow!" The student said. "You mean to say you have been teaching for more than 60 years?"

    "Yes."

    "When did you last teach a course in a subject that existed when you were a student?"

    This difficult question required some thought. After a pause, I said, "September of 1951."

    "Wow! You mean to say that everything you have taught in more than 50 years was not taught to you; you had to learn on your own?"

    "Right."

    "You must be a pretty good learner."

    I modestly agreed.

    The student then said, "What a shame you're not that good a teacher."

    The student had it right; what most faculty members are good at, if anything, is learning rather than teaching. Recall that in the one-room schoolhouse, students taught students. The teacher served as a guide and a resource but not as one who force-fed content into students' minds.

    Ways of Learning

    There are many different ways of learning; teaching is only one of them. We learn a great deal on our own, in independent study or play. We learn a great deal interacting with others informally -- sharing what we are learning with others and vice versa. We learn a great deal by doing, through trial and error. Long before there were schools as we know them, there was apprenticeship -- learning how to do something by trying it under the guidance of one who knows how. For example, one can learn more architecture by having to design and build one's own house than by taking any number of courses on the subject. When physicians are asked whether they learned more in classes or during their internship, without exception they answer, "Internship."

    In the educational process, students should be offered a wide variety of ways to learn, among which they could choose or with which they could experiment. They do not have to learn different things the same way. They should learn at a very early stage of "schooling" that learning how to learn is largely their responsibility -- with the help they seek but that is not imposed on them.

    The objective of education is learning, not teaching.

    There are two ways that teaching is a powerful tool of learning. Let's abandon for the moment the loaded word teaching, which is unfortunately all too closely linked to the notion of "talking at" or "lecturing," and use instead the rather awkward phrase explaining something to someone else who wants to find out about it. One aspect of explaining something is getting yourself up to snuff on whatever it is that you are trying to explain. I can't very well explain to you how Newton accounted for planetary motion if I haven't boned up on my Newtonian mechanics first. This is a problem we all face all the time, when we are expected to explain something. (Wife asks, "How do we get to Valley Forge from home?" And husband, who does not want to admit he has no idea at all, excuses himself to go to the bathroom; he quickly Googles Mapquest to find out.) This is one sense in which the one who explains learns the most, because the person to whom the explanation is made can afford to forget the explanation promptly in most cases; but the explainers will find it sticking in their minds a lot longer, because they struggled to gain an understanding in the first place in a form clear enough to explain.

    The second aspect of explaining something that leaves the explainer more enriched, and with a much deeper understanding of the subject, is this: To satisfy the person being addressed, to the point where that person can nod his head and say, "Ah, yes, now I understand!" explainers must not only get the matter to fit comfortably into their own worldview, into their own personal frame of reference for understanding the world around them, they also have to figure out how to link their frame of reference to the worldview of the person receiving the explanation, so that the explanation can make sense to that person, too. This involves an intense effort on the part of the explainer to get into the other person's mind, so to speak, and that exercise is at the heart of learning in general. For, by practicing repeatedly how to create links between my mind and another's, I am reaching the very core of the art of learning from the ambient culture. Without that skill, I can only learn from direct experience; with that skill, I can learn from the experience of the whole world. Thus, whenever I struggle to explain something to someone else, and succeed in doing so, I am advancing my ability to learn from others, too.

    Learning through Explanation

    This aspect of learning through explanation has been overlooked by most commentators. And that is a shame, because both aspects of learning are what makes the age mixing that takes place in the world at large such a valuable educational tool. Younger kids are always seeking answers from older kids -- sometimes just slightly older kids (the seven-year-old tapping the presumed life wisdom of the so-much-more-experienced nine-year-old), often much older kids. The older kids love it, and their abilities are exercised mightily in these interactions. They have to figure out what it is that they understand about the question being raised, and they have to figure out how to make their understanding comprehensible to the younger kids. The same process occurs over and over again in the world at large; this is why it is so important to keep communities multi-aged, and why it is so destructive to learning, and to the development of culture in general, to segregate certain ages (children, old people) from others.

    Continued in article

    Bob Jensen's threads on assessment, learning, and technology in education are at http://faculty.trinity.edu/rjensen/000aaa/0000start.htm

    In particular note the document on assessment --- http://faculty.trinity.edu/rjensen/assess.htm


    June 18, 2006 message from Bob Kennelly [bob_kennelly@YAHOO.COM]

    I am a data analyst with the Federal Government, recently assigned a project to integrate our accounting codes with XBRL accounting codes, primarily for the quarterly reporting of banking financial information.
     
    For the past few weeks, I've been searching the WEB looking for educational materials that will help us map, roll up, and drill down the data that we receive from the banks that we regulate, to the more generic XBRL accounting codes.
     
    Basically, I'm hoping to provide my team members with the tools to help them make more informed decisions on how to classify accounting codes and capture their findings for further review and discussion.
     
    To my surprise there isn't the wealth of accounting information that I thought there would be on the WEB, but I am very relieved to have found Bob Jensen's site and in particular an article which refers to the kind of information-gathering approaches that I'm hoping to discover!
     
    Here is the brief on that article:
    "Using Hypertext in Instructional Material:  Helping Students Link Accounting Concept Knowledge to Case Applications," by Dickie Crandall and Fred Phillips, Issues in Accounting Education, May 2002, pp. 163-184
    ---
    http://accounting.rutgers.edu/raw/aaa/pubs.htm
     
    We studied whether instructional material that connects accounting concept discussions with sample case applications through hypertext links would enable students to better understand how concepts are to be applied to practical case situations.
     
    Results from a laboratory experiment indicated that students who learned from such hypertext-enriched instructional material were better able to apply concepts to new accounting cases than those who learned from instructional material that contained identical content but lacked the concept-case application hyperlinks. 
     
    Results also indicated that the learning benefits of concept-case application hyperlinks in instructional material were greater when the hyperlinks were self-generated by the students rather than inherited from instructors, but only when students had generated appropriate links. 
     
    Could anyone be so kind as to please suggest other references, articles or tools that will help us better understand and classify the broad range of accounting terminologies and methodologies please?
     
    For more information on XBRL, here is the XBRL link: http://xbrl.org
     
    Thanks very much!
    Bob Kennelly
    OFHEO

    June 19, 2006 reply from Bob Jensen

    Hi Bob,

    You may find the following documents of related interest:

    "Internet Financial Reporting: The Effects of Hyperlinks and Irrelevant Information on Investor Judgments," by Andrea S. Kelton (Ph.D. Dissertation at the University of Tennessee) --- http://www.mgt.ncsu.edu/pdfs/accounting/kelton_dissertation_1-19-06.pdf

    "Extendible Adaptive Hypermedia Courseware: Integrating Different Courses and Web Material," in Adaptive Hypermedia and Adaptive Web-Based Systems: International Conference, AH 2000, Trento, Italy, August 2000, Proceedings, eds. P. Brusilovsky, O. Stock, and C. Strapparava, Lecture Notes in Computer Science, Vol. 1892 (Springer Berlin/Heidelberg, 2000), ISSN 0302-9743 --- Click Here

    "Concept, Knowledge, and Thought," G. C. Oden, Annual Review of Psychology Vol. 38: 203-227 (Volume publication date January 1987) --- Click Here

    "A Framework for Organization and Representation of Concept Knowledge in Autonomous Agents," by Paul Davidsson,  Department of Computer Science, University of Lund, Box 118, S–221 00 Lund, Sweden email: Paul.Davidsson@dna.lth.se

    "Active concept learning for image retrieval in dynamic databases," by Dong, A. Bhanu, B. Center for Res. in Intelligent Syst., California Univ., Riverside, CA, USA; This paper appears in: Computer Vision, 2003. Proceedings. Ninth IEEE International Conference on Publication Date: 13-16 Oct. 2003 On page(s): 90- 95 vol.1 ISSN: ISBN: 0-7695-1950-4 --- Click Here

    "Types and qualities of knowledge," by Ton de Jong and Monica G.M. Ferguson-Hessler, Educational Psychologist, 1996, Vol. 31, No. 2, pp. 105-113 --- Click Here

    Also note http://faculty.trinity.edu/rjensen/assess.htm#DownfallOfLecturing

    Hope this helps
    Bob Jensen
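    The mapping-and-rollup task Bob Kennelly describes --- translating institution-specific accounting codes into more generic XBRL concepts and aggregating balances under them --- can be sketched as a lookup table plus a summation. All of the account codes and concept names below are hypothetical, not actual XBRL taxonomy elements:

```python
# Hedged sketch: map proprietary account codes to generic (hypothetical)
# XBRL-style concepts, then roll balances up by concept.
from collections import defaultdict

# Hypothetical institution-specific codes -> hypothetical XBRL-style tags
CODE_MAP = {
    "1010": "CashAndCashEquivalents",
    "1020": "CashAndCashEquivalents",
    "1210": "LoansReceivable",
    "1220": "LoansReceivable",
    "2010": "Deposits",
}

def roll_up(balances):
    """Aggregate (code, amount) pairs under their mapped concept.
    Unmapped codes are collected for human review rather than dropped."""
    totals, unmapped = defaultdict(float), []
    for code, amount in balances:
        tag = CODE_MAP.get(code)
        if tag is None:
            unmapped.append((code, amount))
        else:
            totals[tag] += amount
    return dict(totals), unmapped

totals, unmapped = roll_up([("1010", 500.0), ("1020", 250.0),
                            ("1210", 900.0), ("9999", 10.0)])
print(totals)    # {'CashAndCashEquivalents': 750.0, 'LoansReceivable': 900.0}
print(unmapped)  # [('9999', 10.0)]
```

    Keeping unmapped codes visible, instead of silently dropping them, supports exactly the "capture their findings for further review and discussion" workflow the email asks about.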


    Assessing-to-Learn Physics: Project Website --- http://a2l.physics.umass.edu/

    Bob Jensen's threads on science and medicine tutorials are at http://faculty.trinity.edu/rjensen/Bookbob2.htm#Science


    Tips on Preparing Multiple Choice Examinations

    Some great tips on preparing multiple choice examinations
    "Multiple Choice Exam Theory (Just In Time For The New Term)," by Jonathan Sterne, Chronicle of Higher Education, January 10, 2013 ---
    http://chronicle.com/blogs/profhacker/multiple-choice-exam-theory/45275?cid=wc&utm_source=wc&utm_medium=en

    [This is a guest post by Jonathan Sterne, an associate professor in the Department of Art History and Communication Studies at McGill University. His latest books are MP3: The Meaning of a Format (Duke University Press) and The Sound Studies Reader (Routledge). Find him online at http://sterneworks.org and follow him on Twitter @jonathansterne.--@JBJ]

    Every summer, before I assemble my fall courses, I read a book on pedagogy. Last summer’s choice was Cathy Davidson’s Now You See It (except I read it in the spring). Those who are familiar with critiques of mainstream educational practice will find many familiar arguments, but Now You See It crucially connects them with US educational policy. The book also challenges teachers who did not grow up online to think about what difference it makes that their students did. In particular, Davidson skewers pieties about attention, mastery, testing and evaluation.

    The one part of the book I couldn’t make my peace with was her critique of multiple choice testing. I agree in principle with everything she says, but what can you do in large lecture situations, where many of the small class principles—like the ones she put into practice for This Is Your Brain on the Internet—won’t work simply because of the scale of the operation?

    When I asked her about it, we talked about multiple choice approaches that might work. Clickers are currently popular in one corner of pedagogical theory for large lectures. Like many schools, McGill promotes them as a kind of participation (which is roughly at the level of voting on American Idol – except as Henry Jenkins shows, there’s a lot more affect invested there). I dislike clickers because they eliminate even more spontaneity from the humanities classroom than slideware already does.  I prefer in-class exercises built around techniques like think-write-pair-share.

    Multiple-Choice Testing for Comprehension, Not Recognition

    I’ve got another system I want to share here, which is admittedly imperfect. Indeed, I brought it up because I was hoping Cathy knew a better solution for big classes. She didn’t, so I’m posting it here because it’s the best thing I currently know of.

    It’s based on testing theory I read many years ago, and it seems to work in my large-lecture introduction to Communication Studies course.  It is a multiple choice system that tests for comprehension, rather than recognition.  As Derek Bruff explained in a 2010 ProfHacker post, multiple-choice works best when it operates at the conceptual level, rather than at the level of regurgitating facts. This works perfectly for me, since Intro to Communication Studies at McGill is largely concept-driven.

    A couple caveats are in order here: 1) students generally don’t like it. It looks like other multiple choice tests but it’s not, so skills that were well developed in years of standardized testing are rendered irrelevant. 2) multiple choice is only one axis of evaluation for the course, and as with Bruff’s final, multiple-choice makes up only part of the exam, with the other part being free-written short answers. Students must write and synthesize, and they are subject to pop quizzes, which they also dislike (except for a small subset that realizes a side-effect is they keep up with readings). On the syllabus, I am completely clear about which evaluation methods are coercive (those I use to make them keep up with the reading and material) and which are creative (where they must analyze, synthesize and make ideas their own).

    So, here’s my multiple choice final exam formula.

    Step 1: Make it semi-open book. Each student is allowed to bring in a single sheet of 8.5″ x 11” paper, double sided, single-layered (don’t ask). On that sheet, they can write anything they want, so long as it’s in their own handwriting. They must submit the sheet with the exam.

    The advantage of this method is it allows students to write down anything they have trouble memorizing, but it forces them to study and synthesize before they get to the moment of the test.  Even if they copy someone else, they still have to expend all that energy writing down the information.  And most students turn in very original, very intricate study guides.

    Step 2: Eliminate recognition as a factor in the test.

    Most multiple choice questions rely on recognition as the path to the right answer. You get a question stem, and then four or five answers, one of which will be right. Often, the right answer is something the student will recognize from the reading, while the wrong answers aren’t.

    But recognition isn’t the kind of thinking we want to test for. We want to test if the student understands the reading.

    The answer to this problem is simple: spend more time writing the wrong answers.

    Pretty much all my multiple choice exam questions take this form:

    Question stem.
    –> Right answer
    –> True statement from the same reading or a related reading, but that does not correctly answer the question
    –> Argument or position author rehearsed and dismissed; or that appears in another reading that contradicts the right answer.

    From here, you’re basically set, though I often add a 4th option that is “the common sense” answer (since people bring a lot of preconceptions to media studies), or I take the opportunity to crack a joke.
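    The distractor taxonomy above can be written down as a small data structure. The sketch below is purely illustrative (the class, field names, and sample content are mine, not the author's); it shows how each item carries a right answer plus the three distractor types before the options are shuffled into a presentable order:

    ```python
    import random
    from dataclasses import dataclass

    @dataclass
    class Question:
        """One multiple-choice item following the distractor pattern above."""
        stem: str
        right: str              # the correct answer
        true_but_off: str       # true statement from the reading that doesn't answer the stem
        dismissed: str          # argument the author rehearsed and dismissed
        common_sense: str = ""  # optional "common sense" preconception distractor

        def options(self, rng: random.Random) -> list:
            """Shuffle the answer choices so position carries no information."""
            opts = [self.right, self.true_but_off, self.dismissed]
            if self.common_sense:
                opts.append(self.common_sense)
            rng.shuffle(opts)
            return opts

    q = Question(
        stem="According to the reading, why did X happen?",
        right="Because of A.",
        true_but_off="The reading also discusses B, but B does not explain X.",
        dismissed="The author considers and rejects C as an explanation.",
        common_sense="Everyone assumes it was D.",
    )
    print(q.options(random.Random(42)))
    ```

    The point of the structure is that every wrong answer is deliberately authored to a type, rather than being padding the student can eliminate by recognition alone.
    
    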

    Step 3: Give the students practice questions, and explain the system to them. I hide nothing. I tell them how I write the questions, why I write them the way I do, and what I expect of them. I even have them talk about what to write on their sheets of paper.  I use my university’s online courseware, which as Jason Jones explained in a 2010 ProfHacker post, takes the practice quiz out of class time, and lets students have multiple cracks at it as they get ready for the exam.

    A few other guidelines:

    Step 4 (optional): For the first time in 2012, I had students try to write questions themselves. Over the course of about 10 weeks, I had groups of 18 students write up and post questions on the discussion board (that follow the rules above) that pertained to readings or lectures from their assigned week. A large number of them were pretty good, so I edited them and added them to my question bank for the final exam. So for fall 2012, my COMS 210 students wrote about half the questions they were likely to encounter on the final. If they were exceptionally lucky, their own question might wind up on their own exam (we used 4 different forms for the final).
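    One simple way to produce several equivalent forms from a shared question bank, as described above with the four forms of the final, is a seeded shuffle: every form contains the same questions, only the ordering differs, so the forms stay equivalent in content and difficulty. This is a hypothetical sketch of that idea, not the author's actual procedure:

    ```python
    import random

    def make_forms(bank: list, n_forms: int, seed: int = 2012) -> list:
        """Return n_forms orderings of the same question bank.

        Each form gets its own reproducible shuffle, so the same
        (bank, seed) always yields the same set of forms.
        """
        forms = []
        for i in range(n_forms):
            rng = random.Random(seed + i)  # distinct, reproducible seed per form
            form = list(bank)              # copy so the bank itself is untouched
            rng.shuffle(form)
            forms.append(form)
        return forms

    bank = [f"Q{k}" for k in range(1, 21)]  # 20 placeholder question IDs
    forms = make_forms(bank, 4)
    ```

    In practice one would shuffle option order within each question as well, so that circulated answer keys from one form are useless on another.
    
    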

    Here are links to my syllabus and to a copy of the write your own multiple choice assignment (with the names removed).

    Caveats

    1. This is an imperfect system, but it’s the best I’ve found that combines an economy of labor, vigorous testing, analytical thinking (rather than recognition) and expansiveness—the students need to engage with all of the readings. It is certainly not, as Cathy says, a “boss task” – that’s the term paper.
    2. McGill undergraduates are generally very strong students.  This format, or the optional assignment, may be less appropriate for undergrad populations who don’t arrive at university “already very good at school.”
    3. The optional assignment was definitely more work than just writing new questions myself.  And not all the students will appreciate it (or that fact–though I only got one complaint out of 187 students).  It did seem to reduce test anxiety among the students I talked with, though, which is always a good thing.

    I think a lot about large-lecture pedagogy and I’d be delighted to hear from other profs—in any university field—who teach big classes and who find ways to nurture student learning and intense evaluation in an environment structured by limited resources and large numbers.

    Continued in article


    A Defense of the Multiple-Choice Exam ---
    http://www.chronicle.com/article/A-Defense-of-the/238098?cid=at&utm_source=at&utm_medium=en&elqTrackId=6c34011386bb4157bf32871f93fc6070&elq=58de49d36d48489c80569a3b1345dd98&elqaid=11172&elqat=1&elqCampaignId=4303

    Jensen Comment
    Assume that the test banks for textbooks have been compromised. You might be able to confuse your students by using a test bank of a competitor's textbook, but eventually students will catch on to what you are doing. Also test banks seldom have good multiple choice exam questions except when the questions have been adapted from CPA, CMA, or other certification examinations. But such adaptations increase the likelihood that students have access to archives of such questions.

    Another trick is to slightly reword the questions so as to change the answers. This, however, may become harder than writing your own questions from scratch.

    Also assume that the examinations, especially essay and case questions, you gave in previous terms are in student archives such as fraternity files.

    Since students are going to face multiple choice examinations on future GRE, GMAT, LSAT, CPA, CMA, and other examinations you can do them a favor by devoting time in a course teaching them how to take multiple choice examinations.

    Enter the phrase "How to take a multiple choice" at http://www.bing.com/

    Just after the Ice Age, when I prepared to take the CPA examination, there were no CPA coaching materials like you can buy today (VCR machines and computers had not yet been invented). I mostly studied for the CPA examination by concentrating as best I could on former CPA examinations (which were available in hard copy in those days). By the way, in addition to multiple choice questions there were essay questions and problems on CPA examinations even in those days. My lowest score was in the auditing part of the examination. I would never have passed that part if the grader had not given me credit for an essay answer that I crossed out. In those days you could take the CPA examination as a senior in college before you graduated. What a great feeling to graduate with that monkey off your back.

     


    Onsite Versus Online Differences for Faculty

    "U. of Phoenix Reports on Its Students' Academic Achievement," by Goldie Blumenstyk, Chronicle of Higher Education, June 5, 2008 --- http://chronicle.com/daily/2008/06/3115n.htm?utm_source=at&utm_medium=en

    The University of Phoenix is often derided by traditional academics for caring more about its bottom line than about academic quality, and every year, the annual report issued by its parent company focuses more on profits than student performance.

    The institution that has become the largest private university in North America is releasing its first "Annual Academic Report," which it will make available on its Web site today. The university's leaders say the findings show that its educational model is effective in helping students succeed in college, especially those who are underprepared.

    Freshmen at the University of Phoenix enter with reading, writing, and mathematical skills that are, on average, below those of other college students, but according to data from standardized tests, Phoenix students appear to improve in those skills at a greater rate than do students at other colleges.

    And in a comparison of students who enter college with "risk factors" that often contribute to their dropping out, Phoenix's rates of completion for a bachelor's degree were substantially higher than for institutions over all.

    William J. Pepicello, president of the 330,000-student university, said those and other findings shared in advance with The Chronicle show that the 32-year-old, open-access institution is fulfilling its goals.

    "This ties into our social mission for our university," said Mr. Pepicello, in an interview at the company's headquarters here. "We take these students and we do give them a significant increase in skills."

    Phoenix for years has been extensively measuring and monitoring student progress for internal purposes, using the data to change the content and design of its courses or to reshape its approach to remedial education.

    It decided to develop and publish this report—distinct from the financial reports that its parent company, the $2.6-billion Apollo Group Inc., regularly provides—as "a good-faith attempt on our part" to show the university's commitment to growing public demand for more accountability by institutions of higher education, said Mr. Pepicello.

    He and other university leaders fully expect some challenges to the findings, but they say the institution, by publishing the report, is showing its willingness to confront scrutiny of its educational record from within academe. "It lets us, in a public forum, talk to our colleagues about what we do and how well we do it," said Mr. Pepicello.

    The introduction this academic year of a test that could be administered to both campus-based and distance-education students—the Measure of Academic Proficiency and Progress exam by the Educational Testing Service—also made this kind of reporting possible, he said. Nearly two-thirds of Phoenix students attend online.

    Patrick M. Callan, president of the National Center for Public Policy and Higher Education, said that although he had not yet seen Phoenix's data, its decision to publish such a report was "a very positive development."

    He has urged colleges to be open in their reporting on themselves. Even if the university has chosen to release data that put it in the best light, as others often do, Mr. Callan said the report will be a significant piece of the national debate over what value an institution can add to a student.

    "For higher education, it is a positive and useful and constructive approach," Mr. Callan said. Publication of the report, he added, was in line with other efforts by the university "to be part of the discussion on the outcomes of higher education." Those efforts include the university's recent creation of a research center on adult learners (for which Mr. Callan is an unpaid adviser).

     

    A Mixed Report Card

    In the report, some of those outcomes look better than others.

    "It certainly is not perfect," said Mr. Pepicello of some of the test scores. "It is where we are."

    In its report, Phoenix shows the results from its 1,966 students who took the MAPP test this year, compared with the national sample of more than 376,000 students from about 300 institutions.

    The results show that in reading, critical thinking, and writing, its freshmen scored below those of the population over all, but the difference between those scores and those of its seniors was greater than for the population at large. The difference was more marked in mathematics, although the university's freshmen and seniors' scores were both notably lower than those of the whole test-taking pool.

    Bill Wynne, MAPP test product specialist, said that without knowing more about the makeup of the comparative samples and other information, he could not characterize the statistical significance of the gains the university was reporting, except that they were at least as good as those reported by the national cross section. "The magnitude of the change is in the eye of the beholder," he said.

    Mr. Pepicello said he wished the seniors' scores were higher, particularly in math, but he considered all of the findings positive because they indicated that students improve when they attend. "This doesn't embarrass me," he said. "This is really good information for us to really improve our institution."

    (Phoenix did not track the progress of individual students, but MAPP officials said the university's pool of freshmen and seniors taking the test was large enough and random enough to justify its using different groups of students for comparisons.)

    In another test, involving a smaller pool of students, the Phoenix students' "information literacy" skills for such tasks as evaluating sources and understanding economic, legal, and social issues were also comparable to or significantly higher than the mean scores in several categories. Adam Honea, the provost, said the findings from the Standardized Assessment of Information Literacy Skills test, developed at Kent State University, were important to the institution since "information literacy is a goal of ours."

    Continued in article

    Bob Jensen's threads on asynchronous learning are at http://faculty.trinity.edu/rjensen/255wp.htm
    Keep in mind that the University of Phoenix has a combination of onsite and online degree programs.

    Bob Jensen's threads on controversies of education technology and online learning are at http://faculty.trinity.edu/rjensen/000aaa/0000start.htm

    Bob Jensen's threads on online training and education alternatives are at http://faculty.trinity.edu/rjensen/crossborder.htm

    Bob Jensen's threads on higher education controversies are at http://faculty.trinity.edu/rjensen/HigherEdControversies.htm

    The Chronicle's Goldie Blumenstyk has covered distance education for more than a decade, and during that time she's written stories about the economics of for-profit education, the ways that online institutions market themselves, and the demise of the 50-percent rule. About the only thing she hadn't done, it seemed, was to take a course from an online university. But this spring she finally took the plunge, and now she has completed a class in government and nonprofit accounting through the University of Phoenix. She shares tales from the cyber-classroom -- and her final grade -- in a podcast with Paul Fain, a Chronicle reporter.
    Chronicle of Higher Education, June 11, 2008 (Audio) --- http://chronicle.com/media/audio/v54/i40/cyber_classroom/

    ·         All course materials (including textbooks) online; No additional textbooks to purchase

    ·         $1,600 fee for the course and materials

    ·         Woman instructor with respectable academic credentials and experience in course content

    ·         Instructor had good communications with students and between students

    ·         Total of 14 quite dedicated online students in course, most of whom were mature with full-time day jobs

    ·         30% of grade from team projects

    ·         Many unassigned online helper tutorials that were not fully utilized by Goldie

    ·         Goldie earned a 92 (A-)

    ·         She gave a positive evaluation to the course and would gladly take other courses if she had the time

    ·         She considered the course to have a heavy workload


    "The Chronicle's special report on Online Learning explores how calls for quality control and assessment are reshaping online learning," (Not Free), Chronicle of Higher Education, November 2011 ---
    https://www.chronicle-store.com/Store/ProductDetails.aspx?CO=CQ&ID=78602&cid=ol_nlb_wc

    The Chronicle's special report on Online Learning explores how calls for quality control and assessment are reshaping online learning. As online learning spreads throughout higher education, so have calls for quality control and assessment. Accrediting groups are scrambling to keep up, and Congress and government officials continue to scrutinize the high student-loan default rates and aggressive recruiting tactics of some for-profit, mostly online colleges. But the push for accountability isn't coming just from outside. More colleges are looking inward, conducting their own self-examinations into what works and what doesn't.

    Also in this year's report:
     
    • Strategies for teaching and doing research online
    • Members of the U.S. military are taking online courses while serving in Afghanistan
    • Community colleges are using online technology to keep an eye on at-risk students and help them understand their own learning style
    • The push to determine what students learn online, not just how much time they spend in class
    • Presidents' views on e-learning

    Bob Jensen's threads on asynchronous learning ---
    http://faculty.trinity.edu/rjensen/255wp.htm

    Bob Jensen's threads on online course and degree programs ---
    http://faculty.trinity.edu/rjensen/Crossborder.htm


    Soaring Popularity of E-Learning Among Students But Not Faculty
    How many U.S. students took at least one online course from a legitimate college in Fall 2005?

    More students are taking online college courses than ever before, yet the majority of faculty still aren’t warming up to the concept of e-learning, according to a national survey from the country’s largest association of organizations and institutions focused on online education . . . ‘We didn’t become faculty to sit in front of a computer screen,’
    Elia Powers, "Growing Popularity of E-Learning, Inside Higher Ed, November 10, 2006 --- http://www.insidehighered.com/news/2006/11/10/online

    More students are taking online college courses than ever before, yet the majority of faculty still aren’t warming up to the concept of e-learning, according to a national survey from the country’s largest association of organizations and institutions focused on online education.

    Roughly 3.2 million students took at least one online course from a degree-granting institution during the fall 2005 term, the Sloan Consortium said. That’s double the number who reported doing so in 2002, the first year the group collected data, and more than 800,000 above the 2004 total. While the number of online course participants has increased each year, the rate of growth slowed from 2003 to 2004.

    The report, a joint partnership between the group and the College Board, defines online courses as those in which 80 percent of the content is delivered via the Internet.

    The Sloan Survey of Online Learning, “Making the Grade: Online Education in the United States, 2006,” shows that 62 percent of chief academic officers say that the learning outcomes in online education are now “as good as or superior to face-to-face instruction,” and nearly 6 in 10 agree that e-learning is “critical to the long-term strategy of their institution.” Both numbers are up from a year ago.

    Researchers at the Sloan Consortium, which is administered through Babson College and Franklin W. Olin College of Engineering, received responses from officials at more than 2,200 colleges and universities across the country. (The report makes few references to for-profit colleges, a force in the online market, in part because of a lack of survey responses from those institutions.)

    Much of the report is hardly surprising. The bulk of online students are adult or “nontraditional” learners, and more than 70 percent of those surveyed said online education reaches students not served by face-to-face programs.

    What stands out is the number of faculty who still don’t see e-learning as a valuable tool. Only about one in four academic leaders said that their faculty members “accept the value and legitimacy of online education,” the survey shows. That number has remained steady throughout the four surveys. Private nonprofit colleges were the least accepting — about one in five faculty members reported seeing value in the programs.

    Elaine Allen, co-author of the report and a Babson associate professor of statistics and entrepreneurship, said those numbers are striking.

    “As a faculty member, I read that response as, ‘We didn’t become faculty to sit in front of a computer screen,’ ” Allen said. “It’s a very hard adjustment. We sat in lectures for an hour when we were students, but there’s a paradigm shift in how people learn.”

    Barbara Macaulay, chief academic officer at UMass Online, which offers programs through the University of Massachusetts, said nearly all faculty members teaching the online classes there also teach face-to-face courses, enabling them to see where an online class could fill in the gap (for instance, serving a student who is hesitant to speak up in class).

    She said she isn’t surprised to see data illustrating the growing popularity of online courses with students, because her program has seen rapid growth in the last year. Roughly 24,000 students are enrolled in online degree and certificate courses through the university this fall — a 23 percent increase from a year ago, she said.

    “Undergraduates see it as a way to complete their degrees — it gives them more flexibility,” Macaulay said.

    The Sloan report shows that about 80 percent of students taking online courses are at the undergraduate level. About half are taking online courses through community colleges and 13 percent through doctoral and research universities, according to the survey.

    Nearly all institutions with total enrollments exceeding 15,000 students have some online offerings, and about two-thirds of them have fully online programs, compared with about one in six at the smallest institutions (those with 1,500 students or fewer), the report notes. Allen said private nonprofit colleges are often set in enrollment totals and not looking to expand into the online market.

    The report indicates that two-year colleges are particularly willing to be involved in online learning.

    “Our institutions tend to embrace changes a little more readily and try different pedagogical styles,” said Kent Phillippe, a senior research associate at the American Association of Community Colleges.

    The report cites a few barriers to what it calls the “widespread adoption of online learning,” chief among them the concern among college officials that some of their students lack the discipline to succeed in an online setting. Nearly two-thirds of survey respondents defined that as a barrier.

    Allen, the report’s co-author, said she thinks that issue arises mostly in classes in which work can be turned in at any time and lectures can be accessed at all hours. “If you are holding class in real time, there tends to be less attrition,” she said. The report doesn’t differentiate between the live and non-live online courses, but Allen said she plans to include that in next year’s edition.

    Few survey respondents said acceptance of online degrees by potential employers was a critical barrier — although liberal arts college officials were more apt to see it as an issue.

    November 10, 2006 reply from John Brozovsky [jbrozovs@vt.edu]

    Hi Bob:

    One reason why might be what I have seen. The in residence accounting students that I talk with take online classes here because they are EASY and do not take much work. This would be very popular with students but not generally so with faculty.

    John

    November 10, 2006 reply from Bob Jensen

    Hi John,

    Then there is a quality-control problem wherever this is the case. It would be a travesty if any respected college had two or more categories of academic standards or faculty assignments.

    Variations in academic standards have long been a problem between part-time versus full-time faculty, although grade inflation can be higher or lower among part-time faculty. In one instance, it’s the tenure-track faculty who give higher grades because they're often more worried about student evaluations. At the opposite extreme it is part-time faculty who give higher grades for many reasons that we can think of if we think about it.

    One thing that I'm dead certain about is that highly motivated students tend to do better in online courses ceteris paribus. Reasons are mainly that time is used more efficiently in getting to class (no wasted time driving or walking to class), less wasted time getting teammates together on team projects, and fewer reasons for missing class.

    Also online alternatives offer some key advantages for certain types of handicapped students --- http://faculty.trinity.edu/rjensen/000aaa/thetools.htm 

    My opinions on learning advantages of E-Learning were heavily influenced by the most extensive and respected study of online versus onsite learning experiments in the SCALE experiments using full-time resident students at the University of Illinois --- http://faculty.trinity.edu/rjensen/255wp.htm#Illinois 

    In the SCALE experiments cutting across 30 disciplines, it was generally found that motivated students learned better online than their onsite counterparts having the same instructors. However, there was no significant impact on students who got low grades in online versus onsite treatment groups.

    I think the main problem for faculty is that online teaching tends to burn out instructors faster than onsite teaching. This was also evident in the SCALE experiments. When done correctly, online courses are more communication-intensive between instructors and students. Also, online learning takes more preparation time if it is done correctly.

    My hero for online learning is still Amy Dunbar who maintains high standards for everything:

    http://www.cs.trinity.edu/~rjensen/002cpe/02start.htm

    http://faculty.trinity.edu/rjensen/book01q4.htm#Dunbar

    Bob Jensen

    November 10, 2006 reply from John Brozovsky [jbrozovs@vt.edu]

    Hi Bob:

    Also, that is why many times it is not done 'right'. When it is not done right, the students do not get the same education. Students generally do not complain about getting 'less for their money'. Since we do not offer online classes in our department, the ones our students take are the university-required general education courses, and our students in particular are not unhappy about being shortchanged in that area, as they frequently would have preferred none anyway.

    John

     

    Bob Jensen's threads on open sharing and education technology are at http://faculty.trinity.edu/rjensen/000aaa/0000start.htm

    Bob Jensen's threads on online training and education alternatives are at http://faculty.trinity.edu/rjensen/crossborder.htm

    Motivations for Distance Learning --- http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#Motivations

    Bob Jensen's threads on the dark side of online learning and teaching are at http://faculty.trinity.edu/rjensen/000aaa/theworry.htm


    Question
    Why should teaching a course online take twice as much time as teaching it onsite?

    Answer
    Introduction to Economics:  Experiences of teaching this course online versus onsite

    With a growing number of courses offered online and degrees offered through the Internet, there is a considerable interest in online education, particularly as it relates to the quality of online instruction. The major concerns are centering on the following questions: What will be the new role for instructors in online education? How will students' learning outcomes be assured and improved in online learning environment? How will effective communication and interaction be established with students in the absence of face-to-face instruction? How will instructors motivate students to learn in the online learning environment? This paper will examine new challenges and barriers for online instructors, highlight major themes prevalent in the literature related to “quality control or assurance” in online education, and provide practical strategies for instructors to design and deliver effective online instruction. Recommendations will be made on how to prepare instructors for quality online instruction.
    Yi Yang and Linda F. Cornelious, "Preparing Instructors for Quality Online Instruction, Working Paper --- http://www.westga.edu/%7Edistance/ojdla/spring81/yang81.htm

    Jensen Comment:  The bottom line is that teaching the course online took twice as much time, with the extra time coming "largely from increased student contact and individualized instruction and not from the use of technology per se."

    Online teaching is more likely to result in instructor burnout.  These and other issues are discussed in my "dark side" paper at http://faculty.trinity.edu/rjensen/000aaa/theworry.htm 

    April 1, 2005 message from Carolyn Kotlas [kotlas@email.unc.edu]

    COMPUTERS IN THE CLASSROOM AND OPEN BOOK EXAMS

    In "PCs in the Classroom & Open Book Exams" (UBIQUITY, vol. 6, issue 9, March 15-22, 2005), Evan Golub asks and supplies some answers to questions regarding open-book/open-note exams. When classroom computer use is allowed and encouraged, how can instructors secure the open-book exam environment? How can cheating be minimized when students are allowed Internet access during open-book exams? Golub's suggested solutions are available online at
    http://www.acm.org/ubiquity/views/v6i9_golub.html

    Ubiquity is a free, Web-based publication of the Association for Computing Machinery (ACM), "dedicated to fostering critical analysis and in-depth commentary on issues relating to the nature, constitution, structure, science, engineering, technology, practices, and paradigms of the IT profession." For more information, contact: Ubiquity, email: ubiquity@acm.org ; Web: http://www.acm.org/ubiquity/ 

    For more information on the ACM, contact: ACM, One Astor Plaza, 1515 Broadway, New York, NY 10036, USA; tel: 800-342-6626 or 212-626-0500; Web: http://www.acm.org/


    NEW EDUCAUSE E-BOOK ON THE NET GENERATION

    EDUCATING THE NET GENERATION, a new EDUCAUSE e-book of essays edited by Diana G. Oblinger and James L. Oblinger, "explores the Net Gen and the implications for institutions in areas such as teaching, service, learning space design, faculty development, and curriculum." Essays include: "Technology and Learning Expectations of the Net Generation;" "Using Technology as a Learning Tool, Not Just the Cool New Thing;" "Curricula Designed to Meet 21st-Century Expectations;" "Faculty Development for the Net Generation;" and "Net Generation Students and Libraries." The entire book is available online at no cost at http://www.educause.edu/educatingthenetgen/ .

    EDUCAUSE is a nonprofit association whose mission is to advance higher education by promoting the intelligent use of information technology. For more information, contact: Educause, 4772 Walnut Street, Suite 206, Boulder, CO 80301-2538 USA; tel: 303-449-4430; fax: 303-440-0461; email: info@educause.edu;  Web: http://www.educause.edu/

    See also:

    GROWING UP DIGITAL: THE RISE OF THE NET GENERATION by Don Tapscott McGraw-Hill, 1999; ISBN: 0-07-063361-4 http://www.growingupdigital.com/


    EFFECTIVE E-LEARNING DESIGN

    "The unpredictability of the student context and the mediated relationship with the student require careful attention by the educational designer to details which might otherwise be managed by the teacher at the time of instruction." In "Elements of Effective e-Learning Design" (INTERNATIONAL REVIEW OF RESEARCH IN OPEN AND DISTANCE LEARNING, March 2005) Andrew R. Brown and Bradley D. Voltz cover six elements of effective design that can help create effective e-learning delivery. Drawing upon examples from The Le@rning Federation, an initiative of state and federal governments of Australia and New Zealand, they discuss lesson planning, instructional design, creative writing, and software specification. The paper is available online at http://www.irrodl.org/content/v6.1/brown_voltz.html 

    International Review of Research in Open and Distance Learning (IRRODL) [ISSN 1492-3831] is a free, refereed ejournal published by Athabasca University - Canada's Open University. For more information, contact Paula Smith, IRRODL Managing Editor; tel: 780-675-6810; fax: 780-675-672; email: irrodl@athabascau.ca ; Web: http://www.irrodl.org/

    The Le@rning Federation (TLF) is an "initiative designed to create online curriculum materials and the necessary infrastructure to ensure that teachers and students in Australia and New Zealand can use these materials to widen and enhance their learning experiences in the classroom." For more information, see http://www.thelearningfederation.edu.au/


    RECOMMENDED READING

    "Recommended Reading" lists items that have been recommended to me or that Infobits readers have found particularly interesting and/or useful, including books, articles, and websites published by Infobits subscribers. Send your recommendations to carolyn_kotlas@unc.edu for possible inclusion in this column.

    Author Clark Aldrich recommends his new book:

    LEARNING BY DOING: A COMPREHENSIVE GUIDE TO SIMULATIONS, COMPUTER GAMES, AND PEDAGOGY IN E-LEARNING AND OTHER EDUCATIONAL EXPERIENCES Wiley, April 2005 ISBN: 0-7879-7735-7 hardcover $60.00 (US)

    Description from Wiley website:

    "Designed for learning professionals and drawing on both game creators and instructional designers, Learning by Doing explains how to select, research, build, sell, deploy, and measure the right type of educational simulation for the right situation. It covers simple approaches that use basic or no technology through projects on the scale of computer games and flight simulators. The book role models content as well, written accessibly with humor, precision, interactivity, and lots of pictures. Many will also find it a useful tool to improve communication between themselves and their customers, employees, sponsors, and colleagues."

    The table of contents and some excerpts are available at http://www.wiley.com/WileyCDA/WileyTitle/productCd-0787977357.html

    Aldrich is also author of SIMULATIONS AND THE FUTURE OF LEARNING: AN INNOVATIVE (AND PERHAPS REVOLUTIONARY) APPROACH TO E-LEARNING. See http://www.wiley.com/WileyCDA/WileyTitle/productCd-0787969621.html  for more information or to request an evaluation copy of this title.

    Also see
    Looking at Learning….Again, Part 2
    --- http://www.learner.org/resources/series114.html 

    Bob Jensen's documents on education technology are at http://faculty.trinity.edu/rjensen/000aaa/0000start.htm

    More on this topic appears in the module below.


    "Far From Honorable," by Steve Kolowich, Inside Higher Ed, October 25, 2011 ---
    http://www.insidehighered.com/news/2011/10/25/online-students-might-feel-less-accountable-honor-codes

    Much of the urgency around creating a “sense of community” in online courses springs from a desire to keep online students from dropping out. But a recent paper suggests that strengthening a sense of social belonging among online students might help universities fight another problem: cheating.

    In a series of experiments, researchers at Ohio University found that students in fully online psychology courses who signed an honor code promising not to cheat broke that pledge at a significantly higher rate than did students in a “blended” course that took place primarily in a classroom.

    “The more distant students are, the more disconnected they feel, and the more likely it is that they’ll rationalize cheating,” Frank M. LoSchiavo, one of the authors, conjectured in an interview with Inside Higher Ed.

    While acknowledging the limitations inherent to a study with such a narrow sample, and the fact that motivations are particularly hard to pin down when it comes to cheating, LoSchiavo and Mark A. Shatz, both psychology professors at Ohio University's Zanesville campus, said their findings may indicate that meeting face-to-face with peers and professors confers a stronger sense of accountability among students. “Honor codes,” LoSchiavo said, “are more effective when there are [strong] social connections.”

    Honor codes are not, of course, the only method of deterring cheating in online courses. The proliferation of online programs has given rise to a cottage industry of remote proctoring technology, including one product that takes periodic fingerprint readings while monitoring a student’s test-taking environment with a 360-degree camera. (A 2010 survey by the Campus Computing Project suggests that a minority of institutions authenticate the identities of online students as a rule.)

    But LoSchiavo said that he and Shatz were more interested in finding out whether honor codes held any sway online. If so, then online instructors might add pledges to their arsenal of anti-cheating tools, LoSchiavo said. If not, the finding nonetheless makes an intriguing contribution to the discussion about student engagement and “perceived social distance” in the online environment.

    They experimented with the effectiveness of honor codes in three introductory psychology courses at Ohio University. The first course had 40 students and was completely online. These students, like those in subsequent trials, were a mix of traditional-age and adult students, mostly from regional campuses in the Ohio University system. There was no honor code. Over the course of the term, the students took 14 multiple-choice quizzes with no proctoring of any kind. At the end of the term, 73 percent of the students admitted to cheating on at least one of them.

    The second trial involved another fully online introductory course in the same subject. LoSchiavo and Shatz divided the class evenly into two groups of 42 students, and imposed an honor code -- posted online with the other course materials -- on one group but not the other. The students “digitally signed the code during the first week of the term, prior to completing any assignments.” The definition of cheating was the same as in the first trial: no notes, no textbooks, no Internet, no family or friends. There was no significant difference in the self-reported cheating between the two groups.

    In a third trial, the professors repeated the experiment with 165 undergraduates in a “blended” course, where only 20 percent of the course was administered online and 80 percent in a traditional classroom setting. Again, they split the students into two groups: one in which they were asked to sign an honor code, and another in which they were not.

    This time, when LoSchiavo and Shatz surveyed the students at the end of the term, there was a significant difference: Students who promised not to cheat were about 25 percent less likely to cheat than were those who made no such promise. Among the students who had not signed the code, 82 percent admitted to cheating.

    LoSchiavo concedes that this study offers no definitive answers on the question of whether students are more likely to cheat in fully online courses. Cheating is more often than not a crime of opportunity, and containing integrity violations probably has much more to do with designing a system that limits the opportunities to cheat and gives relatively little weight to those assignments for which cheating is hardest to police.

    “The bottom line is that if there are opportunities, students will cheat,” he said. “And the more opportunities they have, the more cheating there will be, and it is incumbent upon professors to put in a system that, when it’s important, cheating will be contained.”

    Continued in article

    Jensen Comment
    I think universities like Trinity University that expanded their honor codes to include student courts are generally happy with the operations of those honor codes. However, Trinity has only full time students and no distance education courses.

    One thing that I hated giving up was grading control. For most of my teaching career I gave F grades to students who seriously cheated in my courses. Under the revised Trinity Honor Code, instructors can no longer control the granting of F grades for cheating.

    When I was a student at Stanford the Honor Code included a pledge to report cheating of other students. I think most universities have watered down this aspect of their honor codes because, in this greatly increased era of litigation, student whistle blowers can be sued big time. Universities may continue to encourage such whistle blowing, but they no longer make students sign pledges that on their honor they will be whistleblowers if they do not want to bear the risk of litigation by students they report.

    Bob Jensen's threads on assessment ---
    http://faculty.trinity.edu/rjensen/Assess.htm


    "Nationally Recognized Assessment and Higher Education Study Center Findings as Resources for Assessment Projects," by Tracey Sutherland, Accounting Education News, 2007 Winter Issue, pp. 5-7

    While nearly all accounting programs are wrestling with various kinds of assessment initiatives to meet local assessment plans and/or accreditation needs, most colleges and universities participate in larger assessment projects whose results may not be shared at the College/School level. There may be information available on your campus through campus-level assessment and institutional research that generate data that could be useful for your accounting program/school assessment initiatives. Below are examples of three such research projects, and some of their recent findings about college students.

    Some things in The 2006 Report of the National Survey of Student Engagement especially caught my eye:

    Promising Findings from the National Survey of Student Engagement

    • Student engagement is positively related to first-year and senior student grades and to persistence between the first and second year of college.

    • Student engagement has compensatory effects on grades and persistence of students from historically underserved backgrounds.

    • Compared with campus-based students, distance education learners reported higher levels of academic challenge, engaged more often in deep learning activities, and reported greater developmental gains from college.

    • Part-time working students reported grades comparable to other students and also perceived the campus to be as supportive of their academic and social needs as their non-working peers.

    • Four out of five beginning college students expected that reflective learning activities would be an important part of their first-year experience.

    Disappointing Findings from the National Survey of Student Engagement

    • Students spend on average only about 13–14 hours a week preparing for class, far below what faculty members say is necessary to do well in their classes.

    • Students study less during the first year of college than they expected to at the start of the academic year.

    • Women are less likely than men to interact with faculty members outside of class including doing research with a faculty member.

    • Distance education students are less involved in active and collaborative learning.

    • Adult learners were much less likely to have participated in such enriching educational activities as community service, foreign language study, a culminating senior experience, research with faculty, and co-curricular activities.

    • Compared with other students, part-time students who are working had less contact with faculty and participated less in active and collaborative learning activities and enriching educational experiences.

    Some additional 2006 NSSE findings

    • Distance education students reported higher levels of academic challenge, and reported engaging more often in deep learning activities such as the reflective learning activities. They also reported participating less in collaborative learning experiences and worked more hours off campus.

    • Women students are more likely to be engaged in foreign language coursework.

    • Male students spent more time engaged in working with classmates on projects outside of class.

    • Almost half (46%) of adult students were working more than 30 hours per week and about three-fourths were caring for dependents. In contrast, only 3% of traditional-age students worked more than 30 hours per week, and about four-fifths spent no time caring for dependents.

    Bob Jensen's threads on higher education controversies are at http://faculty.trinity.edu/rjensen/HigherEdControversies.htm


    Students Reviewing Each Others' Projects

    January 30, 2009 message from David Fordham, James Madison University [fordhadr@JMU.EDU]

    I teach an MBA section of "Introduction to Information Security". One of the course requirements is an Information Security Policy Manual for a hypothetical company. Students submit their manuals electronically, with the only identifying information being their name as the title of the file. I strip off all other identifying information (Tools-Options, File-Properties, etc.) from the document and change the name of the file to "Student 1" "Student 2" etc.

    Then, I distribute the file to two other students for blind review.

    In reality, each author receives THREE (3) reviews, because I myself provide a review, in addition to the two students. I do NOT identify the reviewers, either, so the author gets three reviews, but does not know which one is mine and which are the other two student reviews. Two are blind, and one is mine, but all the student gets is "review 1", "review 2", and "review 3". I am NOT always "review 3".

    This has proven to be very effective. Each student gets to actually SEE two other students' work up close and personal and has to put thought into evaluating it, and in so doing, can compare their peers' work to their own. Plus, each student then gets three reviews from three other individuals, making a total of FIVE (5) different perspectives which to compare with their own.

    This "reviewed" submission is the "mid-term" submission. The students then have the option (all of them take it!) to revise their manual if they wish for the final submission. The quality of the final product is day-and-night difference from what I used to get: truly professional level work. Hence, I'm a believer in the system.

    (Plus, I can rage all I want in my review of the first submission if its really bad, and the student doesn't know it's me!)

    Incidentally, part of the course grade is how well they review their two assigned manuals... I expect good comments, constructive criticism, useful suggestions, etc. Because the students are all in the executive MBA program, and because this approach is novel, I usually get some really good participation and high-quality reviews.

    No, it doesn't save me a lot of time, since I still personally "grade" (e.g., do a review of) each submission. But I'm not doing it to save time; I'm doing it because it gives high value to the student. I can, however, easily see where peer review would be a fantastic time-saver when a professor gives lengthy assignments to large numbers of students.

    David Fordham
    JMU
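    Fordham's anonymize-and-assign process can be sketched in a few lines of Python. This is a hypothetical illustration, not his actual tooling: the "Student N" labeling follows his description, but the rotate-a-shuffled-roster rule for guaranteeing that nobody reviews their own paper (and that every student reviews exactly two) is my assumption.

    ```python
    import random

    def assign_blind_reviews(authors, reviews_per_paper=2, seed=0):
        """Assign each author's paper to `reviews_per_paper` other students.

        Returns (labels, assignments): anonymized labels like "Student 1"
        for each author, and {author: [reviewer, ...]} with no self-review.
        Requires len(authors) > reviews_per_paper.
        """
        rng = random.Random(seed)
        labels = {a: f"Student {i + 1}" for i, a in enumerate(authors)}
        order = authors[:]
        rng.shuffle(order)  # hide any meaning in roster order
        n = len(order)
        assignments = {}
        for i, author in enumerate(order):
            # The next k students around the shuffled circle review this
            # paper, so reviewers are distinct and the load is balanced.
            reviewers = [order[(i + k) % n] for k in range(1, reviews_per_paper + 1)]
            assignments[author] = reviewers
        return labels, assignments
    ```

    The same circle trick scales to any class size and any number of reviews per paper, and a fixed seed makes the pairing reproducible for the instructor while staying opaque to students.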

    The inmates are running the asylum
    From Duke University:  One of the Most Irresponsible Grading Systems in the World

    Her approach? "So, this year, when I teach 'This Is Your Brain on the Internet,' I'm trying out a new point system. Do all the work, you get an A. Don't need an A? Don't have time to do all the work? No problem. You can aim for and earn a B. There will be a chart. You do the assignment satisfactorily, you get the points. Add up the points, there's your grade. Clearcut. No guesswork. No second-guessing 'what the prof wants.' No gaming the system. Clearcut. Student is responsible." That still leaves the question of determining whether students have done the work. Here again, Davidson plans to rely on students. "Since I already have structured my seminar (it worked brilliantly last year) so that two students lead us in every class, they can now also read all the class blogs (as they used to) and pass judgment on whether they are satisfactory. Thumbs up, thumbs down," she writes.
    Scott Jaschik, "Getting Out of Grading," Inside Higher Education,  August 3, 2009
    Jensen Comment
    No mention of how Professor Davidson investigates and punishes plagiarism and other easy ways to cheat in this system. My guess is that she leaves it up to the students to police themselves any way they like. One way to cheat is simply to hire another student to do the assignment. With no examinations in a controlled setting, who knows who is doing whose work?

    It is fairly common for professors to use grading inputs when students evaluate each other's term projects, but this is the first time I ever heard of turning the entire grading process (with no examinations) over to students in the class. Read about how David Fordham has students evaluate term projects at http://faculty.trinity.edu/rjensen/assess.htm#StudentPeerReview

    August 4, 2009 reply from David Fordham, James Madison University [fordhadr@JMU.EDU]

    Bob, While I feel the way you do about it, it is interesting to note that this type of thing isn't new.

    In the fall semester of 1973, at the North Campus of what today is the Florida State College in Jacksonville (formerly FCCJ, and when I was going there it was called FJC), I enrolled in a sophomore-level psychology class taught by Dr. Pat Greene. The very first day, Dr. Greene handed out a list of 30 assignments. Each assignment was independent study, and consisted of viewing a 15 to 60 minute video/filmstrip/movie/etc. in the library, or reading a chapter in the textbook, followed by completion of a 1 to 3 page "worksheet" covering the major concepts covered in the "lesson".

    As I recall, the worksheet was essentially a set of fill-in-the-blank questions. It was open book, open note, open anything, and when you completed the worksheet, you put your name on it and dropped it in Dr. Greene's mailbox in the faculty offices lobby at your convenience.

    The first 10 assignments were required in order to pass the course, but students could pick and choose from the remainder. If you stopped after the 10 required assignments, you got a D in the class. If you did 15 assignments, you got a C; 20 a B, and if you completed all 30, you got an A in the class. Students could pick which lessons to complete (after the first 10) if they elected not to do all 30.
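    Dr. Greene's threshold scheme is simple enough to state as code. A minimal sketch, assuming the counts described above (10/15/20/30 completions earn D/C/B/A); how in-between counts are handled, say 17 completed assignments, is my assumption: the student gets the highest threshold reached.

    ```python
    def greene_grade(completed):
        """Letter grade under Dr. Greene's scheme as described above.

        The first 10 assignments are required to pass; 10, 15, 20, and 30
        completions earn D, C, B, and A. In-between counts drop to the
        highest threshold reached (my assumption, not stated in the account).
        """
        if completed < 10:
            return "F"  # required minimum not met
        for threshold, grade in ((30, "A"), (20, "B"), (15, "C"), (10, "D")):
            if completed >= threshold:
                return grade
    ```

    Stated this way, the incentive Fordham observed is easy to see: the step from B to A costs ten more assignments, half again the twenty already done.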

    This was before email, YouTube, and PDF's. Students worked at their own pace, there was no class meeting whatsoever after that first day. After the first day of class where I received the syllabus and assignment sheet, I never attended the classroom again. Dr. Greene supposedly held office hours during class time for students who wanted to ask questions, but I never needed it (nor did anyone else I knew of) because the assignments were so simple and easy, especially since they were open book, open note, and there was no time limit! There was no deadline, either, you could take till the end of the semester if you wanted to.

    Oh, and no exams, either.

    This was also before FERPA. Dr. Greene had a roll taped to his office door with all students' names on it. It was a manual spreadsheet, and as you turned in assignments, you got check marks beside your name in the columns showing which assignments you had "completed". We never got any of the assignments back, but supposedly if an assignment had too many errors, the student would get a dash mark instead of a check mark, indicating the need to do it over again.

    Within 2 weeks, I had completed all 30 assignments, got my A, and never saw Dr. Greene again. I learned a lot about psychology (everything from Maslow's Hierarchy to Pavlov's slobbering dogs, from the (now infamous) Hawthorne Effect to the impact of color on emotions), so I guess the class was a success. But what astounded me was that so many of my classmates quit after earning the B. The idea of having to do half-again as much work for an A compared to a B was apparently just too much for most of my classmates, because when I (out of curiosity) stopped by his office at the end of the semester, I was blown away by the fact that only a couple of us had A's, whereby almost everyone else had the B (and a couple had C's, again to my astonishment). I can't remember if there were any D's or F's.

    At the time, I was new to the college environment, and in my conversations with other faculty members, I discovered that professors enjoyed something called "academic freedom", and none of my other professors seemed to have any problem with what Dr. Greene was doing. In later years, it occurred to me that perhaps we were guinea-pigs for a psychology study he was doing on motivation. But since he was still using this method six years later for my younger sister (and using the same videos, films, and filmstrips!), I have my doubts.

    Dr. Greene was a professor for many, many years. Perhaps he was ahead of his time, with today's camtasia and snag-it and you-tube recordings... None of his assigned work was his own, it was all produced by professional producers, with the exception of his worksheets, which were all the "purple plague" spirit-duplicator handouts.

    I've often wondered how much more, if any, I could have learned if he'd really met with the class and actually tried to teach. But then again, as I took later psychology classes as part of my management undergrad (org behavior, supervision, human relations, etc.) I was pleased with how much I had learned in Dr. Greene's class, so I guess it wasn't a complete waste of time. Many of my friends who were in his class with me found the videos and filmstrips a nice break from the dry lectures of some of our other profs at the time. Plus, we liked the independent-study convenience. Oh, well...

    Bottom line: this type of thing isn't new: 1973 was 35 years ago. Since academic freedom is still around, it doesn't surprise me that Dr. Greene's teaching (and in this case, his grading) style is still around too.

    David Fordham
    James Madison University

    Bob Jensen's threads on cheating are at http://faculty.trinity.edu/rjensen/plagiarism.htm


    Online Versus Onsite for Students

    August 25, 2009 message from

    A lot of the face-to-face students I talk with like online classes BECAUSE THEY ARE EASY. While it is very possible to have a good solid online class (as evidenced by several on this listserv) my perception is that an awful lot of them out there are not. Students can load up with 21+ hours and work fulltime and still have a good GPA.

    John

     

    August 26, 2009 reply from Bob Jensen

    Hi John,

    I would not say out loud to Amy Dunbar or Denny Beresford that they're easy graders ---
    http://www.cs.trinity.edu/~rjensen/002cpe/02start.htm

    I would not say that out loud to the graduates, year after year, of Brigham Young University's two weed-out principles of accounting courses, where classes meet only on rare occasions for inspiration about accountancy rather than for technical learning --- http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#BYUvideo

    Try to tell the graduates of Stanford University’s ADEPT Masters of Electrical Engineering program that they had an easier time of it because the entire program is online.

    There’s an interesting article entitled how researchers misconstrue causality:
    Like elaborately plumed birds … we preen and strut and display our t-values.” That was Edward Leamer’s uncharitable description of his profession in 1983.
    “Cause and Effect:  Instrumental variable help to isolate causal relationships, but they can be taken too far,” The Economist, August 15-21, 20098 Page 68.

    It is often the case that distance education courses are taught by non-tenured faculty, and non-tenured faculty may be easier with respect to grading than regular faculty because they are even more in need of strong teaching evaluations to not lose their jobs. The problem may have nothing whatsoever to do with online versus onsite education.

    I think it is very rewarding to look at grading in formal studies using the same full-time faculty teaching sections of online versus onsite students. By formal study, I mean using the same instructors, the same materials, and essentially the same examinations. The major five-year, multimillion dollar study that first caught my eye was the SCALE experiments on the campus of the University of Illinois where 30 courses from various disciplines were examined over a five year experiment.

    Yes, the SCALE experiments showed that some students got higher grades online, notably B students who became A students and C students who became A students. The online pedagogy tended to have no effect on D and F students --- http://faculty.trinity.edu/rjensen/255wp.htm#Illinois

    Listen to Dan Stone’s audio about the SCALE Experiments --- http://www.cs.trinity.edu/~rjensen/000cpe/00start.htm

    But keep in mind that in the SCALE experiments, the same instructor of a course was grading both the online and onsite sections of the same course. The reason was not likely to be that online sections were easier. The SCALE experiments collected a lot of data pointing to more intense communications with instructors and more efficient use of students' time that is often wasted in going to classes.

    The students in the experiment were full-time on-campus students, such that the confounding problem of having adult part-time students was not a factor in the SCALE experiments of online, asynchronous learning.

     A Statement About Why the SCALE Experiments Were Funded
    ALN = Asynchronous Learning
    We are particularly interested in new outcomes that may be possible through ALN. Asynchronous computer networks have the potential to
    improve contact with faculty, perhaps making self-paced learning a realizable goal for some off- and on-campus students. For example, a motivated student could progress more rapidly toward a degree. Students who are motivated but find they cannot keep up the pace, may be able to slow down and take longer to complete a degree, and not just drop out in frustration. So we are interested in what impact ALN will have on outcomes such as time-to-degree and student retention. There are many opportunities where ALN may contribute to another outcome: lowering the cost of education, e.g., by naturally introducing new values for old measures such as student-faculty ratios. A different kind of outcome for learners who are juggling work and family responsibilities, would be to be able to earn a degree or certification at home. This latter is a special focus for us.

    Alfred P. Sloan Foundation's Program in
    Learning Outside the Classroom at 
    http://w3.scale.uiuc.edu/scale/
     

    Another study that I love to point to was funded by the Chronicle of Higher Education. Read about when one of the Chronicle’s senior editors took a Governmental Accounting Course at the University of Phoenix during which the instructor of the course had no idea that Goldie Blumenstyk was assessing how difficult or how easy the course was for students in general. I think Goldie’s audio report of her experience is still available from the Chronicle of Higher Education. Goldie came away from the course exhausted.

    "U. of Phoenix Reports on Its Students' Academic Achievement," by Goldie Blumenstyk, Chronicle of Higher Education, June 5, 2008 --- http://chronicle.com/daily/2008/06/3115n.htm?utm_source=at&utm_medium=en

    The Chronicle's Goldie Blumenstyk has covered distance education for more than a decade, and during that time she's written stories about the economics of for-profit education, the ways that online institutions market themselves, and the demise of the 50-percent rule. About the only thing she hadn't done, it seemed, was to take a course from an online university. But this spring she finally took the plunge, and now she has completed a class in government and nonprofit accounting through the University of Phoenix. She shares tales from the cyber-classroom -- and her final grade -- in a podcast with Paul Fain, a Chronicle reporter.
    Chronicle of Higher Education, June 11, 2008 (Audio) --- http://chronicle.com/media/audio/v54/i40/cyber_classroom/

    ·         All course materials (including textbooks) online; No additional textbooks to purchase

    ·         $1,600 fee for the course and materials

    ·         Woman instructor with respectable academic credentials and experience in course content

    ·         Instructor had good communications with students and between students

    ·         Total of 14 quite dedicated online students in course, most of whom were mature with full-time day jobs

    ·         30% of grade from team projects

    ·         Many unassigned online helper tutorials that were not fully utilized by Goldie

    ·         Goldie earned a 92 (A-)

    ·         She gave a positive evaluation to the course and would gladly take other courses if she had the time

    ·         She considered the course to have a heavy workload

    The best place to begin searching for research on ALN learning is at
    http://www.sloan-c.org/publications/jaln/index.asp

    Bob Jensen's threads on assessment are at http://faculty.trinity.edu/rjensen/assess.htm

    Bob Jensen’s threads on the dark side of online education and distance education in general can be found at http://faculty.trinity.edu/rjensen/000aaa/theworry.htm

    Bob Jensen's threads on asynchronous learning are at http://faculty.trinity.edu/rjensen/255wp.htm
    Keep in mind that the University of Phoenix has a combination of onsite and online degree programs.

    Bob Jensen's threads on controversies of education technology and online learning are at http://faculty.trinity.edu/rjensen/000aaa/0000start.htm

    Bob Jensen's threads on online training and education alternatives are at http://faculty.trinity.edu/rjensen/crossborder.htm

    Bob Jensen's threads on higher education controversies are at http://faculty.trinity.edu/rjensen/HigherEdControversies.htm

     


    "Study: Little Difference in Learning in Online and In-Class Science Courses," Inside Higher Ed, October 22, 2012 ---
    http://www.insidehighered.com/quicktakes/2012/10/22/study-little-difference-learning-online-and-class-science-courses

    A study in Colorado has found little difference in the learning of students in online or in-person introductory science courses. The study tracked community college students who took science courses online and in traditional classes, and who then went on to four-year universities in the state. Upon transferring, the students in the two groups performed equally well. Some science faculty members have expressed skepticism about the ability of online students in science, due to the lack of group laboratory opportunities, but the programs in Colorado work with companies to provide home kits so that online students can have a lab experience.
     

     

    Jensen Comment
    Firstly, note that online courses are not necessarily mass education (MOOC) styled courses. The student-student and student-faculty interactions can be greater online than onsite. For example, my daughter's introductory chemistry class at the University of Texas had over 600 students; on the date of the final examination the instructor had never met her and had zero control over her final grade. On the other hand, her microbiology instructor in a graduate course at the University of Maine became her husband over 20 years ago.

    Another factor is networking. For example, Harvard Business School students meeting face-to-face in courses bond in life-long networks that may be stronger than for students who've never established networks via classes, dining halls, volleyball games, softball games, rowing on the Charles River, etc. There's more to learning than is typically tested in competency examinations.

    My point is that there are many externalities to both onsite and online learning. And concluding that there's "little difference in learning" depends upon what you mean by learning. The SCALE experiments at the University of Illinois found that online students having the same instructor tended to do slightly better than onsite students. This is partly because there are fewer logistical time wasters in online learning. The effect becomes larger for off-campus students where commuting time (as in Mexico City) can take hours going to and from campus.
    http://faculty.trinity.edu/rjensen/255wp.htm


    An Online Learning Experiment Overwhelms the University of Southern California
    "An Experiment Takes Off," by Doug Lederman, Inside Higher Ed, October 7, 2009 ---
    http://www.insidehighered.com/news/2009/10/07/uscmat# 

    When Karen Symms Gallagher ran into fellow education deans last year, many of them were "politely skeptical," the University of Southern California dean says (politely), about her institution's experiment to take its master's program in teaching online.

    Many of them seemed to appreciate Gallagher's argument that the traditional model of teacher education programs had largely failed to produce the many more top-notch teachers that California (and so many other states) desperately needed. But could a high-quality MAT program be delivered online? And through a partnership with a for-profit entity (2Tor), no less? Really?

    Early results about the program known as MAT@USC have greatly pleased Gallagher and USC. One hundred forty-four students enrolled in the Rossier School of Education program's first full cohort in May, 50 percent more than anticipated and significantly larger than the 100 students who started at that time in the traditional master's in teaching program on the university's Los Angeles campus.

    And this month, a new group of 302 students started in the second of three planned "starts" per year, meaning that USC has already quadrupled the number of would-be teachers it is educating this year and, depending on how many students enroll in January, is on track to increase it a few times more than that.

    It will be a while -- years, probably, until outcomes on teacher certification exams are in and the program's graduates have been successful (or not) in the classroom -- before questions about the program's quality and performance are fully answered (though officials there point out that the technology platform, like much online learning software, provides steady insight into how successfully students are staying on track). But USC officials say that short of quantitative measures such as those, they believe the online program is attracting equally qualified students and is providing an education that is fully equivalent to Rossier's on-ground master's program -- goals that the institution viewed as essential so as not to "dilute the brand" of USC's well-regarded program.

    "So far, we've beaten the odds," says Gallagher. "We're growing in scale while continuing to ensure that we have a really good program."

    "Scale" is a big buzzword in higher education right now, as report after report and new undertaking after new undertaking -- including the Obama administration's American Graduation Initiative -- underscore the perceived need for more Americans with postsecondary credentials. Many institutions -- especially community colleges and for-profit colleges -- are taking it to heart, expanding their capacity and enrolling more students. The push is less evident at other types of colleges and universities, and almost a foreign concept at highly selective institutions.

    That's what is atypical, if not downright exceptional, about the experiment at USC, which Inside Higher Ed explored in concept last fall. At that time, some experts on distance learning and teacher education -- not unlike some of Gallagher's dean peers -- wondered whether students would be willing to pay the tuition of an expensive private university for an online program, among other things.

    Officials at the university and 2Tor -- the company formed by the Princeton Review founder John Katzman, which has provided the technology and administrative infrastructure for the USC program -- were confident that they would be able to tap into the market of Ivy League and other selective college graduates who flock to programs like Teach for America in ever-growing numbers each year but are also interested in getting a formal teaching credential right away.

    While those students certainly have other options -- major public universities such as the University of Wisconsin at Madison and the University of Virginia, and private institutions like Columbia University's Teachers College and Vanderbilt University, among others -- all of them require students to take up residence in a way that doesn't work for everyone.

    Haley Hiatt, a 2005 graduate of Brigham Young University, actually does reside in Los Angeles -- but she's also a relatively new mother who "didn't want to have to put [her nearly 2-year-old daughter] in day care all the time," she says. So after first contemplating master's programs in history at institutions like Vanderbilt and George Washington University, and then weighing a series of graduate programs at institutions in and around Los Angeles, Hiatt entered the first cohort of the MAT@USC program. She now joins her fellow students in "face to face" meetings (on the Internet, using video chat technology) twice a week, but otherwise does most of her other course work on her own time. "I find it takes more discipline than I needed when I was in the classroom" every day at BYU, she says.

    Of the initial cohort of 144 students, about 5 percent got their bachelor's degrees from Ivy League institutions, and about 10 percent came from the crosstown rival University of California at Los Angeles, says Gallagher. About 10 percent hail from historically black colleges and universities -- the proportion of students in the online program who are black (about 11 percent) is about double the proportion in the on-ground program, though the campus program has slightly higher minority numbers overall. Students in the online program are somewhat older (average age 28 vs. 25 for the face-to-face program) and the average college grade point average is identical for both iterations of the program: 3.0, USC officials say.

    Other numbers please Gallagher even more. A greater proportion of students in the online program are in science-related fields than is true in the campus-based program, a heartening sign given the pressure on American teacher education programs to ratchet up the number of science teachers they produce.

    Continued in article

    Jensen Comment
    The key to this kind of explosion in online enrollments is mostly the overall reputation of the university itself.

    Many universities are finding online programs so popular that they are now treating them like cash cows where students pay more for online tuition than for onsite tuition. One university that openly admits this is the University of Wisconsin at Milwaukee (UWM).

    Bob Jensen's threads on why so many students prefer online education to onsite education (even apart from cost savings) ---
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#DistanceEducation
    Also see http://faculty.trinity.edu/rjensen/assess.htm#OnlineVersusOnsite

    Bob Jensen's threads on distance education training and education alternatives ---
    http://faculty.trinity.edu/rjensen/Crossborder.htm

    Bob Jensen's threads on careers are at
    http://faculty.trinity.edu/rjensen/Bookbob1.htm#careers

     


    "Students prefer online courses:  Classes popular with on-campus students," CNN, January 13, 2006 --- http://www.cnn.com/2006/EDUCATION/01/13/oncampus.online.ap/index.html

    At least 2.3 million people took some kind of online course in 2004, according to a recent survey by The Sloan Consortium, an online education group, and two-thirds of colleges offering "face-to-face" courses also offer online ones. But what were once two distinct types of classes are looking more and more alike -- and often dipping into the same pool of students.

    At some schools, online courses -- originally intended for nontraditional students living far from campus -- have proved surprisingly popular with on-campus students. A recent study by South Dakota's Board of Regents found 42 percent of the students enrolled in its distance-education courses weren't so distant: they were located on campus at the university that was hosting the online course.

    Numbers vary depending on the policies of particular colleges, but other schools also have students mixing and matching online and "face-to-face" credits. Motives range from lifestyle to accommodating a job schedule to getting into high-demand courses.

    Classes pose challenges

    Washington State University had about 325 on-campus undergraduates taking one or more distance courses last year. As many as 9,000 students took both distance and in-person classes at Arizona State University last year.

    "Business is really about providing options to their customers, and that's really what we want to do," said Sheila Aaker, extended services coordinator at Black Hills State.

    Still, the trend poses something of a dilemma for universities.

    They are reluctant to fill slots intended for distance students with on-campus ones who are just too lazy to get up for class. On the other hand, if they insist the online courses are just as good, it's hard to tell students they can't take them. And with the student population rising and pressing many colleges for space, they may have little choice.

    In practice, the policy is often shaded. Florida State University tightened on-campus access to online courses several years ago when it discovered some on-campus students hacking into the system to register for them. Now it requires students to get an adviser's permission to take an online class.

    Online, in-person classes blending

    Many schools, like Washington State and Arizona State, let individual departments and academic units decide who can take an online course. They say students with legitimate academic needs -- a conflict with another class, a course they need to graduate that is full -- often get permission, though they still must take some key classes in person.

    In fact, the distinction between online and face-to-face courses is blurring rapidly. Many if not most traditional classes now use online components -- message boards, chat rooms, electronic filing of papers. Students can increasingly "attend" lectures by downloading a video or a podcast.

    At Arizona State, 11,000 students take fully online courses and 40,000 use the online course management system, which is used by many "traditional" classes. Administrators say the distinction between online and traditional is now so meaningless it may not even be reflected in next fall's course catalogue.

    Arizona State's director of distance learning, Marc Van Horne, says students are increasingly demanding both high-tech delivery of education and more control over their schedules. The university should do what it can to help them graduate on time, he says.

    "Is that a worthwhile goal for us to pursue? I'd say 'absolutely,"' Van Horne said. "Is it strictly speaking the mission of a distance learning unit? Not really."

    Then there's the question of whether students are well served by taking a course online instead of in-person. Some teachers are wary, saying showing up to class teaches discipline, and that lectures and class discussions are an important part of learning.

    But online classes aren't necessarily easier. Two-thirds of schools responding to a recent survey by The Sloan Consortium agreed that it takes more discipline for students to succeed in an online course than in a face-to-face one.

    "It's a little harder to get motivated," said Washington State senior Joel Gragg, who took two classes online last year (including "the psychology of motivation"). But, he said, lectures can be overrated -- he was still able to meet with the professor in person when he had questions -- and class discussions are actually better online than in a college classroom, with a diverse group exchanging thoughtful postings.

    "There's young people, there's old people, there's moms, professional people," he said. "You really learn a lot more."

    Bob Jensen's threads on distance education and training alternatives are at
    http://faculty.trinity.edu/rjensen/crossborder.htm

     


    The 2006 National Survey of Student Engagement, released November 13, 2006, for the first time offers a close look at distance education, offering provocative new data suggesting that e-learners report higher levels of engagement, satisfaction and academic challenge than their on-campus peers --- http://nsse.iub.edu/NSSE_2006_Annual_Report/index.cfm

    "The Engaged E-Learner," by Elizabeth Redden, Inside Higher Ed, November 13, 2006 --- http://www.insidehighered.com/news/2006/11/13/nsse

    The 2006 National Survey of Student Engagement, released today, for the first time offers a close look at distance education, offering provocative new data suggesting that e-learners report higher levels of engagement, satisfaction and academic challenge than their on-campus peers.

    Beyond the numbers, however, what institutions choose to do with the data promises to attract extra attention to this year’s report.

    NSSE is one of the few standardized measures of academic outcomes that most officials across a wide range of higher education institutions agree offers something of value. Yet NSSE does not release institution-specific data, leaving it to colleges to choose whether to publicize their numbers.

    Colleges are under mounting pressure, however, to show in concrete, measurable ways that they are successfully educating students, fueled in part by the recent release of the report from the Secretary of Education’s Commission on the Future of Higher Education, which emphasizes the need for the development of comparable measures of student learning. In the commission’s report and in college-led efforts to heed the commission’s call, NSSE has been embraced as one way to do that. In this climate, will a greater number of colleges embrace transparency and release their results?

    Anywhere between one-quarter and one-third of the institutions participating in NSSE choose to release some data, said George Kuh, NSSE’s director and a professor of higher education at Indiana University at Bloomington. But that number includes not only those institutions that release all of the data, but also those that pick and choose the statistics they’d like to share.

    In the “Looking Ahead” section that concluded the 2006 report, the authors note that NSSE can “contribute to the higher education improvement and accountability agenda,” teaming with institutions to experiment with appropriate ways to publicize their NSSE data and developing common templates for colleges to use. The report cautions that the data released for accountability purposes should be accompanied by other indicators of student success, including persistence and graduation rates, degree/certificate completion rates and measurements of post-college endeavors.

    “Has this become a kind of a watershed moment when everybody’s reporting? No. But I think what will happen as a result of the Commission on the Future of Higher Ed, Secretary (Margaret) Spelling’s workgroup, is that there is now more interest in figuring out how to do this,” Kuh said.

    Charles Miller, chairman of the Spellings commission, said he understands that NSSE’s pledge not to release institutional data has encouraged colleges to participate — helping the survey, first introduced in 1999, get off the ground and gain wide acceptance. But Miller said he thinks that at this point, any college that chooses to participate in NSSE should make its data public.

    “Ultimately, the duty of the colleges that take public funds is to make that kind of data public. It’s not a secret that the people in the academy ought to have. What’s the purpose of it if it’s just for the academy? What about the people who want to get the most for their money?”

    Participating public colleges are already obliged to provide the data upon request, but Miller said private institutions, which also rely heavily on public financial aid funds, should share that obligation.

    Kuh said that some colleges’ reluctance to publicize the data stems from a number of factors, the primary reason being that they are not satisfied with the results and feel they might reflect poorly on the institution.

    In addition, some college officials fear that the information, if publicized, may be misused, even conflated to create a rankings system. Furthermore, sharing the data would represent a shift in the cultural paradigm at some institutions used to keeping sensitive data to themselves, Kuh said.

    “The great thing about NSSE and other measures like it is that it comes so close to the core of what colleges and universities are about — teaching and learning. This is some of the most sensitive information that we have about colleges and universities,” Kuh said.

    But Miller said the fact that the data get right to the heart of the matter is precisely why it should be publicized. “It measures what students get while they’re at school, right? If it does that, what’s the fear of publishing it?” Miller asked. “If someone would say, ‘It’s too hard to interpret,’ then that’s an insult to the public.” And if colleges are afraid of what their numbers would suggest, they shouldn’t participate in NSSE at all, Miller said.

    However, Douglas Bennett, president of Earlham College in Indiana and chair of NSSE’s National Advisory Board, affirmed NSSE’s commitment to opening survey participation to all institutions without imposing any pressure that they should make their institutional results public. “As chair of the NSSE board, we believe strongly that institutions own their own data and what they do with it is up to them. There are a variety of considerations institutions are going to take into account as to whether or not they share their NSSE data,” Bennett said.

    However, as president of Earlham, which releases all of its NSSE data and even releases its accreditation reports, Bennett said he thinks colleges, even private institutions, have a professional and moral obligation to demonstrate their effectiveness in response to accountability demands — through NSSE or another means a college might deem appropriate.

    This Year’s Survey

    The 2006 NSSE survey, which is based on data from 260,000 randomly selected first-year and senior students at 523 four-year institutions (NSSE's companion survey, the Community College Survey of Student Engagement, focuses on two-year colleges), looks much more deeply than previous iterations of the survey did into the performance of online students.

    Distance learning students outperform or perform on par with on-campus students on measures including level of academic challenge; student-faculty interaction; enriching educational experiences; higher-order, integrative, and reflective learning; and gains in practical competence, personal and social development, and general education. They demonstrate lower levels of engagement when it comes to active and collaborative learning.

    Karen Miller, a professor of education at the University of Louisville who studies online learning, said the results showing higher or equal levels of engagement among distance learning students make sense: “If you imagine yourself as an undergraduate in a fairly large class, you can sit in that class and feign engagement. You can nod and make eye contact; your mind can be a million miles away. But when you’re online, you’ve got to respond, you’ve got to key in your comments on the discussion board, you’ve got to take part in the group activities.”

    Plus, Miller added, typing is a more complex psycho-motor skill than speaking, requiring extra reflection. “You see what you have said, right in front of your eyes, and if you realize it’s kind of half-baked you can go back and correct it before you post it.”

    Also, said Kuh, most of the distance learners surveyed were over the age of 25. “Seventy percent of them are adult learners. These folks are more focused; they’re better able to manage their time and so forth,” said Kuh, who added that many of the concerns surrounding distance education focus on traditional-aged students who may not have mastered their time management skills.

    Among other results from the 2006 NSSE survey:

    Bob Jensen's threads on distance education and training alternatives around the world are at http://faculty.trinity.edu/rjensen/Crossborder.htm


    Soaring Popularity of E-Learning Among Students But Not Faculty
    How many U.S. students took at least one online course from a legitimate college in Fall 2005?

    More students are taking online college courses than ever before, yet the majority of faculty still aren’t warming up to the concept of e-learning, according to a national survey from the country’s largest association of organizations and institutions focused on online education . . . ‘We didn’t become faculty to sit in front of a computer screen,’
    Elia Powers, "Growing Popularity of E-Learning," Inside Higher Ed, November 10, 2006 --- http://www.insidehighered.com/news/2006/11/10/online

    More students are taking online college courses than ever before, yet the majority of faculty still aren’t warming up to the concept of e-learning, according to a national survey from the country’s largest association of organizations and institutions focused on online education.

    Roughly 3.2 million students took at least one online course from a degree-granting institution during the fall 2005 term, the Sloan Consortium said. That’s double the number who reported doing so in 2002, the first year the group collected data, and more than 800,000 above the 2004 total. While the number of online course participants has increased each year, the rate of growth slowed from 2003 to 2004.

    The report, a joint partnership between the group and the College Board, defines online courses as those in which 80 percent of the content is delivered via the Internet.

    The Sloan Survey of Online Learning, “Making the Grade: Online Education in the United States, 2006,” shows that 62 percent of chief academic officers say that the learning outcomes in online education are now “as good as or superior to face-to-face instruction,” and nearly 6 in 10 agree that e-learning is “critical to the long-term strategy of their institution.” Both numbers are up from a year ago.

    Researchers at the Sloan Consortium, which is administered through Babson College and Franklin W. Olin College of Engineering, received responses from officials at more than 2,200 colleges and universities across the country. (The report makes few references to for-profit colleges, a force in the online market, in part because of a lack of survey responses from those institutions.)

    Much of the report is hardly surprising. The bulk of online students are adult or “nontraditional” learners, and more than 70 percent of those surveyed said online education reaches students not served by face-to-face programs.

    What stands out is the number of faculty who still don’t see e-learning as a valuable tool. Only about one in four academic leaders said that their faculty members “accept the value and legitimacy of online education,” the survey shows. That number has remained steady throughout the four surveys. Private nonprofit colleges were the least accepting — about one in five faculty members reported seeing value in the programs.

    Elaine Allen, co-author of the report and a Babson associate professor of statistics and entrepreneurship, said those numbers are striking.

    “As a faculty member, I read that response as, ‘We didn’t become faculty to sit in front of a computer screen,’ ” Allen said. “It’s a very hard adjustment. We sat in lectures for an hour when we were students, but there’s a paradigm shift in how people learn.”

    Barbara Macaulay, chief academic officer at UMass Online, which offers programs through the University of Massachusetts, said nearly all faculty members teaching the online classes there also teach face-to-face courses, enabling them to see where an online class could fill in the gap (for instance, serving a student who is hesitant to speak up in class).

    She said she isn’t surprised to see data illustrating the growing popularity of online courses with students, because her program has seen rapid growth in the last year. Roughly 24,000 students are enrolled in online degree and certificate courses through the university this fall — a 23 percent increase from a year ago, she said.

    “Undergraduates see it as a way to complete their degrees — it gives them more flexibility,” Macaulay said.

    The Sloan report shows that about 80 percent of students taking online courses are at the undergraduate level. About half are taking online courses through community colleges and 13 percent through doctoral and research universities, according to the survey.

    Nearly all institutions with total enrollments exceeding 15,000 students have some online offerings, and about two-thirds of them have fully online programs, compared with about one in six at the smallest institutions (those with 1,500 students or fewer), the report notes. Allen said private nonprofit colleges are often set in enrollment totals and not looking to expand into the online market.

    The report indicates that two-year colleges are particularly willing to be involved in online learning.

    “Our institutions tend to embrace changes a little more readily and try different pedagogical styles,” said Kent Phillippe, a senior research associate at the American Association of Community Colleges.

    The report cites a few barriers to what it calls the “widespread adoption of online learning,” chief among them the concern among college officials that some of their students lack the discipline to succeed in an online setting. Nearly two-thirds of survey respondents defined that as a barrier.

    Allen, the report’s co-author, said she thinks that issue arises mostly in classes in which work can be turned in at any time and lectures can be accessed at all hours. “If you are holding class in real time, there tends to be less attrition,” she said. The report doesn’t differentiate between the live and non-live online courses, but Allen said she plans to include that in next year’s edition.

    Few survey respondents said acceptance of online degrees by potential employers was a critical barrier — although liberal arts college officials were more apt to see it as an issue.

    November 10, 2006 reply from John Brozovsky [jbrozovs@vt.edu]

    Hi Bob:

    One reason why might be what I have seen. The in residence accounting students that I talk with take online classes here because they are EASY and do not take much work. This would be very popular with students but not generally so with faculty.

    John

    November 10, 2006 reply from Bob Jensen

    Hi John,

    Then there is a quality control problem wherever this is the case. It would be a travesty if any respected college had two or more categories of academic standards or faculty assignments.

    Variations in academic standards have long been a problem between part-time and full-time faculty, although grade inflation can run higher or lower among part-time faculty. In some instances it's the tenure-track faculty who give higher grades because they're often more worried about student evaluations. At the opposite extreme it's the part-time faculty who give higher grades, for many reasons that come to mind if we think about it.

    One thing that I'm dead certain about is that highly motivated students tend to do better in online courses ceteris paribus. Reasons are mainly that time is used more efficiently in getting to class (no wasted time driving or walking to class), less wasted time getting teammates together on team projects, and fewer reasons for missing class.

    Also online alternatives offer some key advantages for certain types of handicapped students --- http://faculty.trinity.edu/rjensen/000aaa/thetools.htm 

    My opinions on learning advantages of E-Learning were heavily influenced by the most extensive and respected study of online versus onsite learning experiments in the SCALE experiments using full-time resident students at the University of Illinois --- http://faculty.trinity.edu/rjensen/255wp.htm#Illinois 

    In the SCALE experiments cutting across 30 disciplines, it was generally found that motivated students learned better online than their onsite counterparts having the same instructors. However, there was no significant impact on students who got low grades in online versus onsite treatment groups.

    I think the main problem with faculty is that online teaching tends to burn out instructors more frequently than onsite teaching. This was also evident in the SCALE experiments. When done correctly, online courses are more communication intensive between instructors and students. Also, online learning takes more preparation time if it is done correctly.

    My hero for online learning is still Amy Dunbar who maintains high standards for everything:

    http://www.cs.trinity.edu/~rjensen/002cpe/02start.htm

    http://faculty.trinity.edu/rjensen/book01q4.htm#Dunbar

    Bob Jensen

    November 10, 2006 reply from John Brozovsky [jbrozovs@vt.edu]

    Hi Bob:

    Also why many times it is not done 'right'. Not done right they do not get the same education. Students generally do not complain about getting 'less for their money'. Since we do not do online classes in department the ones the students are taking are the university required general education and our students in particular are not unhappy with being shortchanged in that area as they frequently would have preferred none anyway.

    John

     

    Bob Jensen's threads on open sharing and education technology are at http://faculty.trinity.edu/rjensen/000aaa/0000start.htm

    Bob Jensen's threads on online training and education alternatives are at http://faculty.trinity.edu/rjensen/crossborder.htm

    Motivations for Distance Learning --- http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#Motivations

    Bob Jensen's threads on the dark side of online learning and teaching are at http://faculty.trinity.edu/rjensen/000aaa/theworry.htm


    October 5, 2006 message from Carolyn Kotlas [kotlas@email.unc.edu]

    STUDENTS' PERCEPTIONS OF ONLINE LEARNING

    "The ultimate question for educational research is how to optimize instructional designs and technology to maximize learning opportunities and achievements in both online and face-to-face environments." Karl L.Smart and James J. Cappel studied two undergraduate courses -- an elective course and a required course -- that incorporated online modules into traditional classes. Their research of students' impressions and satisfaction with the online portions of the classes revealed mixed results:

    -- "participants in the elective course rated use of the learning modules slightly positive while students in the required course rated them slightly negative"

    -- "while students identified the use of simulation as the leading strength of the online units, it was also the second most commonly mentioned problem of these units"

    -- "students simply did not feel that the amount of time it took to complete the modules was worth what was gained"

    The complete paper, "Students' Perceptions of Online Learning: A Comparative Study" (JOURNAL OF INFORMATION TECHNOLOGY EDUCATION, vol. 5, 2006, pp. 201-19), is available online at http://jite.org/documents/Vol5/v5p201-219Smart54.pdf.

    Current and back issues of the Journal of Information Technology Education (JITE) [ISSN 1539-3585 (online) 1547-9714 (print)] are available free of charge at http://jite.org/. The peer-reviewed journal is published annually by the Informing Science Institute. For more information contact: Informing Science Institute, 131 Brookhill Court, Santa Rosa, California 95409 USA; tel: 707-531-4925; fax: 480-247-5724;

    Web: http://informingscience.org/.



    I have heard some faculty argue that asynchronous Internet courses just do not mesh with Trinity's on-campus mission. The SCALE experiments at the University of Illinois indicate that many students learn better in, and prefer, online courses even if they are full-time, resident students. The University of North Texas is finding out the same thing. There may be some interest in what our competition may be in the future, even for full-time, on-campus students at private as well as public colleges and universities.
    On January 17, 2003, Ed Scribner forwarded this article from The Dallas Morning News

    Students Who Live on Campus Choosing Internet Courses Syndicated From: The Dallas Morning News

    DALLAS - Jennifer Pressly could have walked to a nearby lecture hall for her U.S. history class and sat among 125 students a few mornings a week.

    But the 19-year-old freshman at the University of North Texas preferred rolling out of bed and attending class in pajamas at her dorm-room desk. Sometimes she would wait until Saturday afternoon.

    The teen from Rockwall, Texas, took her first college history class online this fall semester. She never met her professor and knew only one of her 125 classmates: her roommate.

    "I take convenience over lectures," she said. "I think I would be bored to death if I took it in lecture."

    She's part of a controversial trend that has surprised many university officials across the country. Given a choice, many traditional college students living on campus pick an online course. Most universities began offering courses via the Internet in the late 1990s to reach a different audience - older students who commute to campus and are juggling a job and family duties.

    During the last year, UNT began offering an online option for six of its highest-enrollment courses that are typically taught in a lecture hall with 100 to 500 students. The online classes, partly offered as a way to free up classroom space in the growing school, filled up before pre-registration ended, UNT officials said. At UNT, 2,877 of the roughly 23,000 undergraduates are taking at least one course online.

    Nationwide, colleges are reporting similar experiences, said Sally Johnstone, director of WCET, a Boulder, Colo., cooperative of state higher education boards and universities that researches distance education. Kansas State University, in a student survey last spring, discovered that 80 percent of its online students were full-time and 20 percent were part-time, the opposite of the college's expectations, Johnstone said.

    "Why pretend these kids want to be in a class all the time? They don't, but kids don't come to campus to sit in their dorm rooms and do things online exclusively," she said. "We're in a transition, and it's a complex one."

    The UT Telecampus, a part of the University of Texas System that serves 15 universities and research facilities, began offering online undergraduate classes in state-required courses two years ago. Its studies show that 80 percent of the 2,260 online students live on campus, and the rest commute.

    Because they are restricted to 30 students each, the UT System's online classes are touted as a more intimate alternative to lecture classes, said Darcy Hardy, director of the UT Telecampus.

    "The freshman-sophomore students are extremely Internet-savvy and understand more about online options and availability than we could have ever imagined," Hardy said.

    Online education advocates say professors can reach students better online than in lecture classes because of the frequent use of e-mail and online discussion groups. Those who oppose the idea say they worry that undergraduates will miss out on the debate, depth and interaction of traditional classroom instruction.

    UNT, like most colleges, is still trying to figure out the effect on its budget. The professorial salary costs are the same, but an online course takes more money to develop. The online students, however, free up classroom space and eliminate the need for so many new buildings in growing universities. The price to enroll is typically the same for students, whether they go to a classroom or sit at their computer.

    Mike Campbell, a history professor at UNT for 36 years, does not want to teach an online class, nor does he approve of offering undergraduate history via the Internet.

    "People shouldn't be sitting in the dorms doing this rather than walking over here," he said. "That is based on a misunderstanding of what matters in history."

    In his class of 125, he asks students rhetorical questions they answer en masse to be sure they're paying attention, he said. He goes beyond the textbook, discussing such topics as the moral and legal issues surrounding slavery.

    He said he compares the online classes to the correspondence courses he hated but had to teach when he came to UNT in 1966. Both methods are too impersonal, he said, recalling how he mailed assignments and tests to correspondence students.

    UNT professors who teach online say the courses are interactive, unlike correspondence courses.

    Matt Pearcy has lectured 125 students for three hours at a time.

    "You'd try to be entertaining," he said. "You have students who get bored after 45 minutes, no matter what you're doing. They're filling out notes, doing their to-do list, reading their newspaper in front of you."

    In his online U.S. history class at UNT, students get two weeks to finish each lesson. They read text, complete click-and-drag exercises, like one that matches terms with historical figures, and take quizzes. They participate in online discussions and group projects, using e-mail to communicate.

    "Hands-down, I believe this is a more effective way to teach," said Pearcy, who is based in St. Paul, Minn. "In this setting, they go to the class when they're ready to learn. They're interacting, so they're paying attention."

    Pressly said she liked the hands-on work in the online class. She could do crossword puzzles to reinforce her history lessons. Or she could click an icon and see what Galileo saw through his telescope in the 17th century.

    "I took more interest in this class than the other ones," she said.

    The class, though, required her to be more disciplined, she said, and that added stress. Two weeks in a row, she waited till 11:57 p.m. Sunday - three minutes before the deadline - to turn in her assignment.

    Online courses aren't for everybody.

    "The thing about sitting in my dorm, there's so much to distract me," said Trevor Shive, a 20-year-old freshman at UNT. "There's the Internet. There's TV. There's radio."

    He said students on campus should take classes in the real, not virtual, world.

    "They've got legs; they can walk to class," he said.

    Continued in the article at http://www.dallasnews.com/ 


    January 17, 2003 response from John L. Rodi [jrodi@IX.NETCOM.COM]

    I would have added one additional element. Today I think too many of us tend to teach accounting the way you teach driver's education: get in the car, turn the key, and off you go. If something goes wrong with the car, you are sunk, since you know nothing conceptually. Furthermore, it makes you a victim of those who do. Conceptual accounting education teaches you to respond to choices, that is, not only how to drive but what to drive. Thanks for the wonderful analogy.

    John Rodi 
    El Camino College

    January 21 reply from Gerald Trites

    On the subject of technology and teaching accounting, I wonder how many of you are in the SAP University Alliance and using it for accounting classes. I just teach advanced financial accounting, and have not found a use for it there. However, I have often felt that there is a place for it in intro financial, in managerial and in AIS. On the latter, there is at least one good text book containing SAP exercises and problems.

    Although there are over 400 universities in the world in the program, one of the areas where use is lowest is accounting courses. The limitation appears to be related to a combination of the learning curve for professors, together with an uncertainty as to how it can be used to effectively teach conceptual material or otherwise fit into curricula.

    Gerald Trites, FCA 
    Professor of Accounting and Information Systems 
    St Francis Xavier University 
    Antigonish, Nova Scotia 
    Website
    - http://www.stfx.ca/people/gtrites 

    The SAP University Alliance homepage is at http://www.sap.com/usa/company/ua/ 

    In today's fast-paced, technically advanced society, universities must master the latest technologies, not only to achieve their own business objectives cost-effectively but also to prepare the next generation of business leaders. To meet the demands for quality teaching, advanced curriculum, and more technically sophisticated graduates, your university is constantly searching for innovative ways of acquiring the latest information technology while adhering to tight budgetary controls.

    SAP can help. A world leader in the development of business software, SAP is making its market-leading, client/server-based enterprise software, the R/3® System, available to the higher education community. Through our SAP University Alliance Program, we are proud to offer you the world's most popular software of its kind for today's businesses. SAP also provides setup, follow-up consulting, and R/3 training for faculty - all at our expense. The SAP R/3 System gives you the most advanced software capabilities used by businesses of all sizes and in all industries around the world.

    There are many ways a university can benefit from an educational alliance with SAP. By partnering with SAP and implementing the R/3 System, your university can:


    January 6, 2006 message from Carolyn Kotlas [kotlas@email.unc.edu]

    No Significant Difference Phenomenon website http://www.nosignificantdifference.org/ 

    The website is a companion piece to Thomas L. Russell's book THE NO SIGNIFICANT DIFFERENCE PHENOMENON, a bibliography of 355 research reports, summaries, and papers that document no significant differences in student outcomes between alternate modes of education delivery.


    DISTANCE LEARNING AND FACULTY CONCERNS

    Despite the growing number of distance learning programs, faculty are often reluctant to move their courses into the online medium. In "Addressing Faculty Concerns About Distance Learning" (ONLINE JOURNAL OF DISTANCE LEARNING ADMINISTRATION, vol. VIII, no. IV, Winter 2005) Jennifer McLean discusses several areas that influence faculty resistance, including: the perception that technical support and training are lacking, the fear of being replaced by technology, and the absence of a clearly understood institutional vision for distance learning. The paper is available online at
    http://www.westga.edu/%7Edistance/ojdla/winter84/mclean84.htm

    The Online Journal of Distance Learning Administration is a free, peer-reviewed quarterly published by the Distance and Distributed Education Center, The State University of West Georgia, 1600 Maple Street, Carrollton, GA 30118 USA; Web: http://www.westga.edu/~distance/jmain11.html .

     


    December 10, 2004 message from Carolyn Kotlas [kotlas@email.unc.edu]

    E-LEARNING ONLINE PRESENTATIONS

    The University of Calgary Continuing Education sponsors Best Practices in E-Learning, a website that provides a forum for anyone working in the field to share their best practices. This month's presentations include:

    -- "To Share or Not To Share: There is No Question" by Rosina Smith Details a new model for permitting "the reuse, multipurposing, and repurposing of existing content"

    -- "Effective Management of Distributed Online Educational Content" by Gary Woodill "[R]eviews the history of online educational content, and argues that the future is in distributed content learning management systems that can handle a wide diversity of content types . . . identifies 40 different genres of online educational content (with links to examples)"

    Presentations are in various formats, including Flash, PDF, HTML, and PowerPoint slides. Registered users can interact with the presenters and post to various discussion forums on the website. There is no charge to register and view presentations. You can also subscribe to their newsletter which announces new presentations each month. (Note: No archive of past months' presentations appears to be on the website.)

    For more information, contact: Rod Corbett, University of Calgary Continuing Education; tel:403-220-6199 or 866-220-4992 (toll-free); email: rod.corbett@ucalgary.ca ; Web: http://elearn.ucalgary.ca/showcase/


    NEW APPROACHES TO EVALUATING ONLINE LEARNING

    "The clear implication is that online learning is not good enough and needs to prove its worth before gaining full acceptance in the pantheon of educational practices. This comparative frame of reference is specious and irrelevant on several counts . . ." In "Escaping the Comparison Trap: Evaluating Online Learning on Its Own Terms" (INNOVATE, vol. 1, issue 2, December 2004/January 2005), John Sener writes that, rather than being inferior to classroom instruction, "[m]any online learning practices have demonstrated superior results or provided access to learning experiences not previously possible." He describes new evaluation models that are being used to judge online learning on its own merits. The paper is available online at http://www.innovateonline.info/index.php?view=article&id=11&action=article.

    You will need to register on the Innovate website to access the paper; there is no charge for registration and access.

    Innovate [ISSN 1552-3233] is a bimonthly, peer-reviewed online periodical published by the Fischler School of Education and Human Services at Nova Southeastern University. The journal focuses on the creative use of information technology (IT) to enhance educational processes in academic, commercial, and government settings. Readers can comment on articles, share material with colleagues and friends, and participate in open forums. For more information, contact James L. Morrison, Editor-in-Chief, Innovate; email: innovate@nova.edu ; Web: http://www.innovateonline.info/.

     


    I read the following from a scheduled program of the 29th Annual Accounting Education Conference, October 17-18, 2003, sponsored by the Texas CPA Society, San Antonio Airport Hilton.

    WEB-BASED AND FACE-TO-FACE INSTRUCTION:
        A COMPARISON OF LEARNING OUTCOMES IN A FINANCIAL ACCOUNTING COURSE

    Explore the results of a study conducted over a four-semester period that focused on the same graduate level financial accounting course that was taught using web-based instruction and face-to-face instruction.  Discuss the comparison of student demographics and characteristics, course satisfaction, and comparative statistics related to learning outcomes.

    Doug Rusth/associate professor/University of Houston at Clear Lake/Clear Lake


    Bob Jensen's threads on asynchronous versus synchronous learning are at http://faculty.trinity.edu/rjensen/255wp.htm 
    Note in particular the research outcomes of The Scale Experiment at the University of Illinois --- http://faculty.trinity.edu/rjensen/255wp.htm#Illinois 

    Once again, my advice to new faculty is at http://faculty.trinity.edu/rjensen/000aaa/newfaculty.htm 


    Minimum Grades as a School Policy

    Question
    Should a student who gets a zero (for not doing anything) or a 23% (for doing something badly) on an assignment, exam, or term paper automatically be upgraded, as a matter of school policy, to 60%, no matter what proportion that grade counts toward the course's final grade?
    Should a student get 60% even if he or she fails to show up for an examination?

    Jensen Comment
    This could lead to some strategies like "don't spend any time on the term paper and concentrate on passing the final examination or vice versa."
    Such strategies are probably not in the spirit of the course design, especially when the instructor intended for students to have to write a paper.
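    The arithmetic behind that worry is easy to sketch. Below is a minimal illustration of a minimum-grade floor in Python; the 60% floor comes from the question above, but the two-component course and the 40/60 weighting are hypothetical:

```python
def course_grade(scores, weights, floor=60):
    """Weighted course grade in which every component score is
    raised to a minimum floor (the policy in question)."""
    assert len(scores) == len(weights)
    assert abs(sum(weights) - 1.0) < 1e-9
    return sum(max(s, floor) * w for s, w in zip(scores, weights))

# Hypothetical course: term paper worth 40%, final exam worth 60%.
# A student who skips the paper entirely (0) but scores 85 on the final:
print(round(course_grade([0, 85], [0.4, 0.6]), 1))           # 75.0 with the floor
print(round(course_grade([0, 85], [0.4, 0.6], floor=0), 1))  # 51.0 without it
```

    Under the floor, the skipped paper still contributes 24 of the 75 points, which is exactly the strategy described above: ignore the term paper and concentrate on passing the final.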

    "Time to Add Basket Weaving as a Course," by Ben Baker, The Irascible Professor, June 22, 2008 --- http://irascibleprofessor.com/comments-06-22-08.htm

    Bob Jensen's threads on higher education controversies are at http://faculty.trinity.edu/rjensen/HigherEdControversies.htm

    Bob Jensen's threads on assessment are at http://faculty.trinity.edu/rjensen/assess.htm


    “If a student doesn’t come to school,” he continued, “how can you justify passing that kid?”
     Fernanda Santos

    "Bronx School’s Top Ranking Stirs Wider Doubts About Rating System," by Fernanda Santos, The New York Times, January 20, 2011 ---
    http://www.nytimes.com/2011/01/21/education/21grades.html?_r=1&hpw

    One of the trademarks of New York City’s school accountability system is an equation that assigns every school a letter grade, A through F, based on a numerical score from 1 to 100.


    Lynn Passarella, facing camera, the principal of the Theater Arts Production Company School, outside the school on Thursday. She declined to comment on the allegations about her school’s grading practices.

    A parent pulling up the latest report card for the Theater Arts Production Company School in the Bronx would find that it earned the score of 106.3 (including extra credit).

    But that very empirical-sounding number, which was the highest of any high school in the city, is based in part on subjective measures like “academic expectations” and “engagement,” as measured by voluntary parent, teacher and student surveys.

    And, according to some teachers at the school, even the more tangible factors in the score — graduation rates and credits earned by students — were not to be taken at face value. The school has a policy that no student who showed up for class should fail, and even some who missed many days of school were still allowed to pass and graduate.

    The Department of Education, which revealed on Wednesday that it was investigating grading practices at the school, says that it has a team devoted to analyzing school statistics every year and looking for red flags like abnormal increases in student scores or dropout rates. But a department official said that nothing in its data had raised suspicions about the school, known as Tapco, until a whistle-blower filed a complaint in October.

    Still, in a data-driven system where letter grades can determine a school’s fate, one big question looms over the investigation: If the allegations turn out to be true, are they an exception or a sign of a major fault in the school accountability system?

    “The D.O.E. has absolutely created a climate for these types of scandals to happen,” Michael Mulgrew, the president of the teachers’ union, said in an interview. “Their culture of ‘measure everything and question nothing a principal tells you’ makes it hard to figure out what’s real and what’s not real inside a school.”

    There are many gradations of impropriety, and it is unclear if any of them apply to Tapco, which has about 500 students and also includes a middle school. The school’s teacher handbook states that no student should fail a class if he or she regularly attends, and that students who miss work should be given “multiple opportunities for student success and work revision.”

    Current and former teachers at the school said that even students who were regularly absent were given passing grades, in some cases with course credits granted by the principal without a teacher’s knowledge. Some students’ records showed credits for courses the school did not offer.

    The investigation of the irregularities at Tapco, which began in October, also includes allegations that the school’s principal, Lynn Passarella, manipulated teacher and parent surveys, which represent 10 of the 100 points in a school’s score. Graduation rates, passing rates on Regents exams and earned credits constitute most of the score.

    Ms. Passarella declined to comment on the allegations.

    A spokesman for the Education Department, Matthew Mittenthal, said: “We take every allegation of misconduct seriously, and hope that the public can reserve judgment until the investigation is complete.”

    Sometimes, the analysts who pore over the data uncover serious problems. Last year, the Education Department lowered the overall scores of three high schools. At Jamaica High School in Queens, the department discovered that the school had improperly granted credit to some transfer students. At John F. Kennedy High School in the Bronx and W. H. Maxwell Career and Technical Education High School in Brooklyn, administrators could not provide documentation to explain why some students had left the schools.

    Since 2008, at least four principals and assistant principals have been reprimanded — two retired, one served a 30-day unpaid suspension and another paid a $6,500 fine — on charges that included tampering with tests.

    Principals can get as much as $25,000 in bonuses if their schools meet or exceed performance targets, and some experts are skeptical that the department’s system of checks and balances is as trustworthy as it should be, particularly when money is at stake.

    Tapco’s administrators got a bonus once, for the 2008-9 school year, when the high school’s overall score was 85.8, which earned it an A. (The middle school scored 73.) Ms. Passarella received $7,000, while her assistant principals got $3,500 each, according to the Education Department. (Administrator bonuses for 2009-10 performance have not been doled out.)

    “There’s an inherent temptation towards corruption when you create a situation where there are rewards for things like higher test scores or favorable surveys,” said Sol Stern, an education researcher at the Manhattan Institute, a conservative research group. “It’s an invitation to cheating.”

    One mother, Cathy Joyner, whose daughter, Sapphire Connor, is a junior, said the school was excellent, adding that “the children are respectful” and that the school was “concentrating on their talents.”

    But one teacher, who spoke on condition of anonymity because he said he feared for his job, gave a different account. For teachers who do not do what the principal wants, the teacher said, “it’s difficult to get tenure.”

    “If a student doesn’t come to school,” he continued, “how can you justify passing that kid?”

    Wow:  97% of NYC Public Elementary Schools Get A or B Grades --- There must be higher IQ in the water!
    "City Schools May Get Fewer A’s," by Jennifer Medina, The New York Times, January 28, 2010 ---
    http://www.nytimes.com/2010/01/30/education/30grades.html?hpw

    Michael Mulgrew, the president of the United Federation of Teachers, criticized the decision to reduce the number of schools that receive top grades.

    Continued in article


    Issues in Group Grading

    December 6, 2004 message from Glen Gray [glen.gray@CSUN.EDU]

    When I have students do group projects, I require each team member complete a peer review form where the team member evaluates the other team members on 8 attributes using a scale from 0 to 4. On this form they also give their team members an overall grade. In a footnote it is explained that an “A” means the team member receives the full team grade; a “B” means a 10% reduction from the team grade; a “C” means 20% discount; a “D” means 30% discount; “E” means 40%, and an “F” means a 100% discount (in other words, the team member should get a zero).

    I assumed that the form added a little peer pressure to the team work process. In the past, students were usually pretty kind to each other. But now I have a situation where the team members on one team have all given either E’s or F’s to one of their team members. Their written comments about this guy are all pretty consistent.

    Now, I’m worried that if I actually enforce the discount scale, things are going to get messy and the s*** is going to hit the fan. I’m going to have one very upset student. He is going to be mad at his fellow teammates.

    Has anyone had a similar experience? What was the outcome? Is there a confidentiality issue here? In other words, are the other teammates also going to be upset that I revealed their evaluations? Is there going to be a lawsuit coming over the horizon?

    Glen L. Gray, PhD, CPA
    Dept. of Accounting & Information Systems
    College of Business & Economics
    California State University, Northridge
    Northridge, CA 91330-8372
    http://www.csun.edu/~vcact00f 
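    The letter-to-discount scale above is concrete enough to compute with. Here is a minimal sketch in Python; the discount fractions come from the footnote Glen describes, but averaging multiple peers' ratings is an assumption, since the post does not say how several ratings are combined:

```python
# Discount fractions from the peer review footnote: the share of the
# team grade that is deducted for each overall letter rating.
DISCOUNT = {"A": 0.00, "B": 0.10, "C": 0.20, "D": 0.30, "E": 0.40, "F": 1.00}

def individual_grade(team_grade, peer_ratings):
    """Apply the average of the peers' discounts to the team grade.
    (Averaging is an assumed combination rule, not stated in the post.)"""
    avg_discount = sum(DISCOUNT[r] for r in peer_ratings) / len(peer_ratings)
    return team_grade * (1 - avg_discount)

# A member rated E, E, F by three teammates, on a team that earned 90:
print(round(individual_grade(90, ["E", "E", "F"]), 1))  # 36.0
# A member every peer rated A keeps the full team grade:
print(round(individual_grade(90, ["A", "A", "A"]), 1))  # 90.0
```

    Under a stricter reading, where any single F zeroes out the member's grade, the same student would receive nothing at all, which is precisely where the messy outcomes Glen anticipates would come from.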

    Most of the replies to the message above encouraged being clear at the beginning that team evaluations would affect the final grade and then sticking to that policy.

    December 5, 2004 reply from David Fordham, James Madison University [fordhadr@JMU.EDU]

    Glen, the fact that you are in California, by itself, makes it much more difficult to predict the lawsuit question. I've seen some lawsuits (and even worse, legal outcomes) from California that are completely unbelievable... Massachusetts too.

    But that said, I can share my experience that I have indeed given zero points on a group grade to students where the peer evaluations indicated unsatisfactory performance. My justification to the students in these "zero" cases has always been, "it was clear from your peers that you were not part of the group effort, and thus have not earned the points for the group assignment".

    I never divulge any specific comments, but I do tell the student that I am willing to share the comments with an impartial arbiter if they wish to have a third party confirm my evidence. To date, no student has ever contested the decision.

    Every other semester or so, I have to deduct points to some degree for unsatisfactory work as judged by peers. So far, I've had no problems making it stick, and in most cases, the affected student willingly admits their deficiency, although usually with excuses and rationales.

    But I'm not in California, and the legal precedents here are unlike those in your neck of the woods.

    If I were on the west coast, however, I'd probably be likely to at least try to stick to my principles as far as my university legal counsel would allow. Then, if my counsel didn't support me, I'd look for employment in a part of the country with a more reasonable legal environment (although that is getting harder to find every day).

    Good luck,

    David Fordham

    December 5, 2004 reply from Amy Dunbar

    Sometimes groups do blow up. Last summer I had one group ask me to remove a member. Another group had a nonfunctioning member, based on the participation scores. I formed an additional group comprised of just those two. They finally learned how to work. Needless to say they weren’t happy with me, but the good thing about teaching is that every semester we get a fresh start!

    Another issue came up for the first time, at least that I noticed. I learned that one group made a pact to rate each other high all semester long regardless of work level, and I still am not sure how I am going to avoid that problem next time around. The agreement came to light when one of the students was upset that he did so poorly on my exams. He told his senior that he had no incentive to do the homework because he could just get the answers from the other group members, and he didn’t have to worry about being graded down because of the agreement. The student was complaining that the incentive structure I set up hurt him because he needed more push to do the homework. The senior told me after the class ended. Any suggestions?

    TEXAS IS GOING TO THE ROSE BOWL!!!!!!!!! Go Horns! Oops, that just slipped out.

    Amy Dunbar
    A Texas alum married to a Texas fanatic

    December 6, 2004 reply from Tracey Sutherland [tracey@AAAHQ.ORG]

    Glen, My first thought on reading your post was that if things get complicated it could be useful to have a context for your grading policy that clearly establishes that it falls within common practice (in accounting and in cooperative college classrooms in general). Now you've already built some context from within accounting by gathering some responses here from a number of colleagues for whom this is a regular practice. Neal's approach can be a useful counterpart to peer evaluation for triangulation purposes -- sometimes students will report that they weren't really on-point for one reason or another (I've done this with good result but only with upper-level grad students).

    If the issue becomes more complicated because the student challenges your approach up the administrative ladder, you could provide additional context for the consistency of your approach by referencing the considerable body of work on these issues in the higher education research literature -- you are using a well-established approach that's been frequently tested.

    A great resource if you need it is Barbara Millis and Phil Cottell's book "Cooperative Learning for Higher Education Faculty" published by Oryx Press (American Council on Education Series on Higher Education). They do a great job of annotating the major work in the area in a short, accessible, and concise book that also includes established criteria used for evaluating group work and some sample forms for peer assessment and self-assessment for group members (also just a great general resource for well-tested cooperative/group activities -- and tips for how to manage implementing them). Phil Cottell is an accounting professor (Miami U.) and would be a great source of information should you need it.

    Your established grading policy indicates that there would be a reduction of grade when team members give poor peer evaluations -- which wouldn't necessarily mean that you would reveal individual's evaluations but that a negative aggregate evaluation would have an effect -- and that would protect confidentiality consistently with your policy. It seems an even clearer case because all group members have given consistently negative evaluations -- as long as it's not some weird interpersonal thing -- something that sounds like that would be a red flag for the legal department. I hate it that we so often worry about legal ramifications . . . but then again it pays to be prepared!

    Peace of the season, 

    Tracey

    December 6, 2004 reply from Bob Jensen

    I once listened to an award winning AIS professor from a very major university (that after last night won't be going to the Orange Bowl this year) say that the best policy is to promise everybody an A in the course.  My question then is what the point of the confidential evaluations would be other than to make the professor feel bad at the end of the course?

    Bob Jensen


    Too Good to Grade:  How can these students get into doctoral programs and law school if their prestigious universities will not disclose grades and class rankings?  Why grade at all in this case?
    Students at some top-ranked B-schools have a secret. It's something they can't share even if it means losing a job offer. It's one some have worked hard for and should be proud of, but instead they keep it to themselves. The secret is their grades.
    At four of the nation's 10 most elite B-schools -- including Harvard, Stanford, and Chicago -- students have adopted policies that prohibit them or their schools from disclosing grades to recruiters. The idea is to reduce competitiveness and eliminate the risk associated with taking difficult courses. But critics say the only thing nondisclosure reduces is one of the most important lessons B-schools should teach: accountability (see BusinessWeek, 9/12/05, "Join the Real World, MBAs"). It's a debate that's flaring up on B-school campuses across the country. (For more on this topic, log on to our B-Schools Forum.)  And nowhere is it more intense than at University of Pennsylvania's Wharton School, where students, faculty, and administrators have locked horns over a school-initiated proposal that would effectively end a decade of grade secrecy at BusinessWeek's No. 3-ranked B-school. It wouldn't undo disclosure rules but would recognize the top 25% of each class -- in effect outing everyone else. It was motivated, says Vice-Dean Anjani Jain in a recent Wharton Journal article, by the "disincentivizing effects" of grade nondisclosure, which he says faculty blame for lackluster academic performance and student disengagement.
    "Campus Confidential:   Four top-tier B-schools don't disclose grades. Now that policy is under attack," Business Week, September 12, 2005 --- http://snipurl.com/BWSept122

    Jensen Comment:  Talk about moral hazard.  What if 90% of the applicants claim to be straight-A graduates at the very top of the class, and nobody can prove otherwise?

    September 2, 2005 message from Denny Beresford [DBeresford@TERRY.UGA.EDU]

    Bob,

    The impression I have (perhaps I'm misinformed) is that most MBA classes result in nearly all A's and B's to students. If that's the case, I wonder how much a grade point average really matters.

    Denny Beresford
     

    September 2, 2005 reply from Bob Jensen

    One of the schools, Stanford, in the 1970s lived with the Van Horn rule that dictated no more than 15% A grades in any MBA class.  I guess grade inflation has hit the top business schools.  Then again, maybe the students are just better than we were.

    I added the following to my Tidbit on this:

    Talk about moral hazard.  What if 90% of the applicants claim to be straight-A graduates at the very top of the class, and nobody can prove otherwise?

    After your message, Denny, I see that perhaps it's not moral hazard.  Maybe 90% of the students actually get A grades in these business schools, in which case nearly 90% would graduate summa cum laude.

    What a joke!  It must be nice teaching students who never hammer you on teaching evaluations because you gave them a C or below.

    The crucial quotation is "faculty blame for lackluster academic performance and student disengagement."  Isn't it a laugh that they all get A and B grades despite "lackluster academic performance and student disengagement"?

    I think these top schools are simply catering to their customers!

     Bob Jensen

    Harvard Business School Eliminates Ban on a Graduate's Discretionary Disclosure of Grades
    The era of the second-year slump at Harvard Business School is over. Or maybe the days of student cooperation are over. Despite strong student opposition, the business school announced Wednesday that it was ending its ban on sharing grades with potential employers. Starting with new students who enroll in the fall, M.B.A. candidates can decide for themselves whether to share their transcripts. The ban on grade-sharing has been enormously popular with students since it was adopted in 1998. Supporters say that it discouraged (or at least kept to a reasonable level) the kind of cut-throat competition for which business schools are known. With the ban, students said they were more comfortable helping one another or taking difficult courses. But a memo sent to students by Jay O. Light, the acting dean, said that the policy was wrong. “Fundamentally, I believe it is inappropriate for HBS to dictate to students what they can and cannot say about their grades during the recruiting process. I believe you and your classmates earn your grades and should be accountable for them, as you will be accountable for your performance in the organizations you will lead in the future,” he wrote.
    Scott Jaschik, "Survival of the Fittest MBA," Inside Higher Ed, December 16, 2005 --- http://www.insidehighered.com/news/2005/12/16/grades

    Bob Jensen's threads on Controversies in Higher Education are at http://faculty.trinity.edu/rjensen/HigherEdControversies.htm

     


    Software for faculty and departmental performance evaluation and management

    May 30, 2006 message from Ed Scribner [escribne@NMSU.EDU]

    A couple of months ago I asked for any experiences with systems that collect faculty activity and productivity data for multiple reporting needs (AACSB, local performance evaluation, etc.). I said I'd get back to the list with a summary of private responses.

    No one reported any significant direct experience, but many AECMers provided names and e-mail addresses of [primarily] associate deans who had researched products from Sedona and Digital Measures. Since my associate dean was leading the charge, I just passed those addresses on to her.

    We ended up selecting Digital Measures mainly because of our local faculty input, the gist of which was that it had a more professional "feel." My recollection is that the risk of data loss with either system is acceptable and that the university "owns" the data. I understand that a grad student is entering our data from the past five years to get us started.

    Ed Scribner
    New Mexico State University
    Las Cruces, NM, USA

    Jensen Comment
    The Digital Measures homepage is at http://www.digitalmeasures.com/

    Over 100 universities use Digital Measures' customized solutions to connect administrators, faculty, staff, students, and alumni. Take a look at a few of the schools and learn more about Digital Measures.


    Free from the Huron Consulting Group (Registration Required) --- http://www.huronconsultinggroup.com/

    Effort Reporting Technology for Higher Education ---
    http://www.huronconsultinggroup.com/uploadedFiles/ECRT_email.pdf

    Question Mark (Software for Test and Tutorial Generation and Networking)
    Barron's Home Page
    Metasys Japan Software
    Question Mark America home page
    Using ExamProc for OMR Exam Marking
    Vizija d.o.o. - Educational Programs - Wisdom Tools
    Yahoo Links

    TechKnowLogia --- http://www.techknowlogia.org/ 
    TechKnowLogia is an international online journal that provides policy makers, strategists, practitioners and technologists at the local, national and global levels with a strategic forum to:
    Explore the vital role of different information technologies (print, audio, visual and digital) in the development of human and knowledge capital;
    Share policies, strategies, experiences and tools in harnessing technologies for knowledge dissemination, effective learning, and efficient education services;
    Review the latest systems and products of technologies of today, and peek into the world of tomorrow; and
    Exchange information about resources, knowledge networks and centers of expertise.

    Bob Jensen's threads on education technologies are at http://faculty.trinity.edu/rjensen/000aaa/0000start.htm


    "What's the Best Q&A Site?" by Wade Roush, MIT's Technology Review, December 22, 2006 --- http://www.technologyreview.com/InfoTech/17932/ 

    Magellan Metasearch --- http://sourceforge.net/projects/magellan2/ 

    Many educators would like to put more materials on the web, but they are concerned about protecting access to all or parts of documents.  For example, a professor may want to share a case with the world but limit the accompanying case solution to selected users.  Or a professor may want to make certain lecture notes available but limit the access of certain copyrighted portions to students in a particular course.   If protecting parts of your documents is of great interest, you may want to consider NetCloak from Maxum at http://www.maxum.com/ .  You can download a free trial version.

    NetCloak Professional Edition combines the power of Maxum's classic combo, NetCloak and NetForms, into a single CGI application or WebSTAR API plug-in. With NetCloak Pro, you can use HTML forms on your web site to create or update your web pages on the fly. Or you can store form data in text files for importing into spreadsheets or databases off-line. Using NetCloak Pro, you can easily create online discussion forums, classified ads, chat systems, self-maintaining home pages, frequently-asked-question lists, or online order forms!

    NetCloak Pro also gives your web site access to e-mail. Users can send e-mail messages via HTML forms, and NetCloak Pro can create or update web pages whenever an e-mail message is received by any e-mail address. Imagine providing HTML archives of your favorite mailing lists in minutes!

    NetCloak Pro allows users to "cloak" pages individually or "cloak" individual paragraphs or text strings.  The level of security seems to be much higher than that of client-side scripted passwords in JavaScript or VBScript.
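    The reason a server-side tool can be more secure than a scripted password is that a JavaScript or VBScript check runs in the visitor's browser, so the password (or the logic that tests it) ships with the page source, where anyone can read it.  A server-side gatekeeper never sends the protected content until the check passes.  Here is a minimal, illustrative sketch of the server-side idea in Python (this is not NetCloak's actual mechanism; the salt and password are made-up examples):

```python
import hashlib
import hmac

# Store only a salted hash on the server, never the password itself.
SALT = b"example-salt"  # illustrative; real systems use a random per-user salt
STORED_HASH = hashlib.sha256(SALT + b"open-sesame").hexdigest()

def is_authorized(submitted_password: str) -> bool:
    """Server-side check: the protected page is served only if this returns True."""
    candidate = hashlib.sha256(SALT + submitted_password.encode()).hexdigest()
    # Constant-time comparison avoids leaking information through timing
    return hmac.compare_digest(candidate, STORED_HASH)

print(is_authorized("open-sesame"))  # True
print(is_authorized("guess"))        # False
```

    Because the comparison happens on the server, a visitor who views the page source sees only the login form, not the secret.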

    Eric Press led me to http://www.maxum.com/NetCloak/FAQ/FAQList.html   (Thank you Eric, and thanks for the "two lunches")

    Richard Campbell responded as follows:

    Alternatives to using Netcloak: 1. Symantec http://www.symantec.com  has a free utility called Secret which will password-protect any type of file.

    2. Winzip http://www.winzip.com  has another shareware utility called Winzip Self-Extractor, which has a password-protect capability. The advantage of this approach is that you can bundle different file types (.doc, .xls), zip them, and have them automatically install to a folder that you have named. If you have a shareware install utility that creates a setup.exe routine, you can also have it install automatically on the student's machine. The price of this product is about $30.

     


    Full Disclosure to Consumers of Higher Education (including assessment of colleges and the Spellings Commission Report) --- http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#FullDisclosure


    Dropping a Bomb on Accreditation
    The most provocative vision for changing accreditation put forward at Tuesday’s meeting came from Robert C. Dickeson, president emeritus of the University of Northern Colorado. Dickeson’s presentation was loaded with irony, in some ways; a position paper he wrote in 2006 as a consultant to Margaret Spellings’s Commission on the Future of Higher Education was harshly critical of the current system of accreditation (calling it rife with conflicts of interest and decidedly lacking in transparency) and suggested replacing the regional accrediting agencies with a “national accreditation foundation” that would establish national standards for colleges to meet. Dickeson’s presentation Tuesday acknowledged that there remained legitimate criticisms of accreditation’s rigor and agility, noting that many colleges and accrediting agencies still lacked good information about student learning outcomes “40 years after the assessment movement began in higher education.”
    Doug Lederman, "Whither Accreditation," Inside Higher Ed, January 28, 2009 --- http://www.insidehighered.com/news/2009/01/28/accredit

    Dickeson's 2006 Position Paper "Dropping a Bomb on Accreditation" --- http://insidehighered.com/news/2006/03/31/accredit


    Here’s something that may be useful when assessing a doctoral program. Note the key items listed near the end of the document.

    From the Chronicle of Higher Education, November 7, 2008 ---
    http://chronicle.com/weekly/v55/i11/11a00104.htm?utm_source=wb&utm_medium=en

     

    "Ohio State Gets Jump on Doctoral Evaluations," by David Glenn, Chronicle of Higher Education, November 7, 2008 --- http://chronicle.com/weekly/v55/i11/11a00104.htm?utm_source=wb&utm_medium=en

    Provosts around the country are anticipating — and some are surely dreading — the long afternoons when they will go over national rankings data for their graduate departments. No later than this winter, after many delays, the National Research Council plans to release its assessments of American doctoral programs.

    Student-faculty ratios, time to degree, stipends, faculty research productivity, and citation counts: Those numbers and many others will be under national scrutiny.

    But one university couldn't wait. Last year, prodded by anxious faculty members worried about low Ph.D. production, Ohio State University conducted a thorough review of its doctoral programs, drawing heavily on data that its departments had compiled for the council's questionnaire. The Ohio State experience provides a window on what may be coming nationally.

    The evaluations had teeth. Of the 90 doctoral programs at Ohio State, five small ones were tagged as "candidates for disinvestment or elimination": comprehensive vocational education (a specialty track in the college of agriculture), soil science, welding engineering, rehabilitation services, and technology education. Another 29 programs were instructed to reassess or restructure themselves.

    Some programs got good news, however. Twenty-nine that were identified as "high quality" or "strong" will share hundreds of thousands of dollars in new student-fellowship subsidies.

    Many faculty members say the assessments provided a long-overdue chance for Ohio State to think strategically, identifying some fields to focus on and others that are marginal. But the process has also had its share of bumps. The central administration concluded that certain colleges, notably the College of Biological Sciences, were too gentle in their self-reports. And some people have complained that the assessments relied too heavily on "input" variables, such as students' GRE scores.

    Despite those concerns, the dean of Ohio State's Graduate School, Patrick S. Osmer, says the assessment project has exceeded his expectations. He hopes it can serve as a model for what other institutions can do with their doctoral data. "The joy of working here," he says, "is that we're trying to take a coordinated, logical approach to all of these questions, to strengthen the university."

    A Faculty Mandate

    The seeds of the assessment project were planted in 2005, when a high-profile faculty committee issued a report warning that Ohio State was generating proportionally fewer Ph.D.'s than were the other Big Ten universities. "The stark fact is that 482 Ph.D. degrees ... granted in 2003-4 is far below the number expected from an institution the size and (self-declared) quality of OSU," the report read. (The 482 figure excluded doctorates awarded by Ohio State's college of education.) At the University of Wisconsin at Madison, for example, each tenure-track faculty member generated an average of 0.4 Ph.D.'s each year. At Ohio State, the figure was only 0.267.
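    The report's ratios invite a quick back-of-the-envelope check.  Using only the figures quoted above (482 Ph.D.s and a rate of 0.267 Ph.D.s per tenure-track faculty member per year), one can infer the approximate faculty count the rate implies, and what OSU's output would have been at Wisconsin's 0.4 rate.  The implied faculty count is my inference, not a number from the report:

```python
# Back-of-the-envelope check on the 2005 OSU faculty report's figures
PHD_DEGREES = 482      # OSU Ph.D.s granted in 2003-04 (excluding education)
OSU_RATE = 0.267       # Ph.D.s per tenure-track faculty member per year
WISCONSIN_RATE = 0.4   # the Big Ten benchmark cited in the report

# Faculty count implied by the OSU rate (an inference, not a reported figure)
implied_faculty = PHD_DEGREES / OSU_RATE

# What the same faculty would produce at Wisconsin's rate
benchmark_output = implied_faculty * WISCONSIN_RATE

print(round(implied_faculty))   # roughly 1805 tenure-track faculty
print(round(benchmark_output))  # roughly 722 Ph.D.s, a shortfall of about 240
```

    In other words, matching the Wisconsin rate would have meant roughly half again as many doctorates per year, which is the gap the committee's report was reacting to.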

    The committee recommended several steps: Give the central administration more power in graduate-level admissions. Organize stipends, fellowships, and course work in ways that encourage students to complete their doctorates in a timely manner. Stop giving doctoral-student subsidies to students who are likely to earn only master's degrees. And distribute subsidies from the central administration on a strategic basis, rewarding the strongest programs and those with the most potential for improvement.

    "One thing that motivated all of this," says Paul Allen Beck, a professor of political science and a former dean of social and behavioral sciences at Ohio State, "was a feeling that the university had not invested enough in Ph.D. education. Our universitywide fellowships were not at a competitive level. We really felt that we should try to do a better job of concentrating our university investments on the very best programs."

    Ohio State officials had hoped to use the National Research Council's final report itself for their evaluations. But after its release was postponed for what seemed like the sixth or seventh time, they moved forward without it.

    In September 2007, Mr. Osmer asked the deans of Ohio State's 18 colleges to report data about their doctoral students' median time to degree, GRE scores, stipends, fellowships, job-placement outcomes, and racial and ethnic diversity.

    Many of those numbers were easy to put together, because departments had compiled them during the previous year in response to the council's questionnaire. But job placements — a topic that will not be covered in the NRC report — were something that certain Ohio State programs had not previously tracked.

    "This was a huge new project for us and for some of our departments as well," says Julie Carpenter-Hubin, director of institutional research and planning. "But simply going around and talking to faculty took care of most of it. It's really remarkable the degree to which faculty members stay in touch with their former doctoral students and know where they are. I think we wound up with placement data for close to 80 percent of our Ph.D. graduates, going 10 years back."

    Defending Their Numbers

    The reports that Ohio State's colleges generated last fall contained a mixture of quantitative data — most prominently GRE scores and time-to-degree numbers — and narrative arguments about their departments' strengths. The College of Social and Behavioral Sciences, for example, noted that several recent Ph.D.s in economics, political science, and psychology had won tenure-track positions at Ivy League institutions.

    When they had to report poor-looking numbers, departments were quick to cite reasons and contexts. The anthropology program said its median time to degree of 7.3 years might seem high when compared with those of other degree courses, but is actually lower than the national average for anthropology students, who typically spend years doing fieldwork. Economics said its retention-and-completion rate, which is less than 50 percent, might look low but is comparable to those in other highly ranked economics departments, where students are often weeded out by comprehensive exams at the end of the first year.

    In April 2008, a committee appointed and led by Mr. Osmer, the graduate-school dean, digested the colleges' reports and issued a report card, ranking the 90 doctoral programs in six categories. (See table on following page.)

    The panel did not meekly accept the colleges' self-evaluations. The College of Biological Sciences, for example, had reported that it lacked enough data to draw distinctions among its programs. But the committee's report argued, among other things, that the small program in entomology appeared to draw relatively little outside research support, and that its students had lower GRE scores than those in other biology programs. (Entomology and all other doctoral programs in biology were among the 29 programs that Mr. Osmer's committee deemed in need of reassessment or restructuring.)

    The report's points about entomology — and about the general organization of the college — were controversial among the faculty members, says Matthew S. Platz, a professor of chemistry who became interim dean of biological sciences in July. But faculty members have taken the lead in developing new designs for the college, he says, to answer many of the central administration's concerns.

    "I'm delighted by the fact that at the grass-roots level, faculty members have been talking about several types of reorganization," Mr. Platz says. "And I'm hopeful that two or three of them will be approved by the end of the year."

    'Unacceptably Low Quality'

    The five doctoral degrees named as candidates for the ax have also stirred controversy.

    Jerry M. Bigham, a professor of soil science and director of Ohio State's School of Environment and Natural Resources, says he was disappointed but not entirely surprised by the committee's suggestion that his program could be terminated. The soil-science program has existed on its own only since 1996; before that it was one of several specializations offered by the doctoral program in agronomy.

    "In essence, we've had students and faculty members spread across three programs," he says. So he understands why the university might want to place soil sciences under a larger umbrella, in order to reduce overhead and streamline the administration.

    At the same time, he says, several people were offended by the Osmer committee's blunt statement that soil-science students are of "unacceptably low quality."

    The panel's analysis of the students' GRE scores was "just a snapshot, and I think it really has to be viewed with caution," Mr. Bigham says. "Even though we're a small program, our students have won university fellowships and have been recognized for their research. So I would really object to any characterization of our students as being weak."

    The final verdict on the five programs is uncertain. The colleges that house them might propose folding them into larger degree courses. Or they might propose killing them outright. All such proposals, which are due this fall, are subject to approval by the central administration.

    Jason W. Marion, president of the university's Council of Graduate Students, says its members have generally supported the doctoral-assessment project, especially its emphasis on improving stipends and fellowships. But some students, he adds, have expressed concern about an overreliance on GRE scores at the expense of harder-to-quantify "output" variables like job-placement outcomes.

    Mr. Osmer replies that job placement actually has been given a great deal of weight. "Placing that alongside the other variables really helped our understanding of these programs come together," he says.

    At this summer's national workshop sessions of the Council of Graduate Schools, Mr. Osmer was invited to lecture about Ohio State's assessment project and to discuss how other institutions might make use of their own National Research Council data. William R. Wiener, a vice provost at Marquette University who also spoke on Mr. Osmer's panel, calls the Ohio State project one example of how universities are becoming smarter about assessments.

    "Assessments need to have reasonable consequences," Mr. Wiener says. "I think more universities realize that they need to create a culture of assessment, and that improving student learning needs to permeate everything that we do."

    Mr. Beck, the former social-sciences dean at Ohio State, says that even for relatively strong departments — his own political-science department was rated "high quality" by Mr. Osmer's committee — a well-designed assessment process can be eye-opening.

    "These programs just kind of float along, guided by their own internal pressures," says Mr. Beck. But "the departments here were forced to take a hard look at themselves, and they sometimes saw things that they didn't like."

    HOW OHIO STATE U. RATES DOCTORAL PROGRAMS

    Until recently, Ohio State University used a simple, quantity-based formula to distribute student-support money to its doctoral programs. In essence, the more credit hours taken by students in a program each quarter, the more money the program collected. But last year the university introduced quality-control measures. It used them to make choices about which programs to invest in — and, more controversially, which ones to eliminate.

    Measures used:

    • Students' time to degree
    • Students' GRE scores
    • Graduates' job placements, 1996-2005
    • Student diversity
    • The program's share of Ph.D. production (both nationally and among Ohio State's peers)
    • "Overall program quality and centrality to the university's mission"

    Resulting ratings:

    • High quality: 12 programs
    • Strong: 17 programs
    • Good: 16 programs
    • New and/or in transition; cannot be fully assessed: 11 programs
    • Must reassess and/or restructure: 29 programs
    • Candidates for disinvestment or elimination: 5 programs

    What the ratings mean:

    • Programs rated "high quality" and "strong" will share new funds from the central administration for graduate-student stipends.
    • "Good" programs have been asked to make improvements in specific areas. Their support will not significantly change.
    • Colleges with doctoral programs that were deemed in need of reassessment or restructuring were asked to submit new strategic plans this fall. Those plans are subject to approval by Ohio State's provost.
    • The new strategic plans will also deal with programs deemed candidates for disinvestment or elimination. Those programs might be folded into larger degree courses, or killed outright.

     Bob Jensen's threads on assessment are at http://faculty.trinity.edu/rjensen/assess.htm


    "Minnesota Colleges Seek Accountability by the Dashboard Light," by Paul Basken, Chronicle of Higher Education, June 18, 2008 ---
    http://chronicle.com/daily/2008/06/3423n.htm

    When your car starts sputtering, it's easy to look at the dashboard and see if you're running out of gas. What if you could do the same with your local college?

    Minnesota's system of state colleges and universities believes it can show the way.

    After two years of preparation, the 32-college system unveiled on Tuesday its new Accountability Dashboard. The service is based on a Web site that displays a series of measures—tuition rates, graduates' employment rates, condition of facilities—that use speedometer-type gauges to show exactly how the Minnesota system and each of its individual colleges is performing.

    The idea is in response to the growing demand, among both policy makers and the public, for colleges to provide more useful and accessible data about how well they are doing their jobs.

    "There's a great call across the country for accountability and transparency, and I don't think it's going to go away," said James H. McCormick, chancellor of the 374,000-student system. "It's just a new way of doing business."

    Shining a Light

    The information in the new format was already publicly available. But its presentation in the dashboard format, along with comparisons with statewide and national figures as well as the system's own goals, will put pressure on administrators and faculty members for improvement, Mr. McCormick and other state education officials told reporters.

    "The dashboard shines a light on where we need to improve," said Ruth Grendahl, vice chairman of the Board of Trustees of the Minnesota State Colleges and Universities.

    Among the areas the dashboard already indicates as needing improvement is the cost of attending Minnesota's state colleges. The gauges for tuition and fees at all 30 of the system's two-year institutions show needles pointing to "needs attention," a reflection of the fact that their costs are higher than those of 80 percent of their peers nationwide.

    The dashboard shows the system faring better in other areas, such as licensure-examination pass rates and degree-completion rates, in which the average figures are in the "meets expectations" range. Other measures, like "innovation" and "student engagement," don't yet show results, as the necessary data are still being collected or the criteria have not yet been defined.
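    Stripped of the speedometer graphics, a dashboard gauge of this kind is just a mapping from a metric's standing among peer institutions to a small set of statuses.  The tuition example above lands in "needs attention" because costs exceed those of 80 percent of peers.  A hypothetical sketch of such a mapping (the thresholds and the middle category are my assumptions; the article does not give Minnesota's actual cut points):

```python
def gauge_status(percentile: float, higher_is_worse: bool = False) -> str:
    """Map a peer-comparison percentile (0-100) to a dashboard status.

    Thresholds are illustrative, not the Minnesota system's rubric.
    """
    if higher_is_worse:
        percentile = 100 - percentile  # e.g., tuition: costlier means worse
    if percentile >= 50:
        return "meets expectations"
    if percentile >= 25:
        return "approaching expectations"
    return "needs attention"

# Two-year tuition costlier than 80% of peers: 80th percentile, inverted
print(gauge_status(80, higher_is_worse=True))  # needs attention
```

    The appeal for a public audience is that the same mapping is applied uniformly across metrics and campuses, so a glance at the needles replaces a table of percentiles.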

    Tool of Accountability

    Many private companies already use dashboard-type displays in their computer systems to help monitor business performance, but the data typically serve an internal function rather than being a tool for public accountability.

    The Minnesota dashboard stems in part from the system's work through the National Association of System Heads, or NASH, on a project to improve the education of minority and low-income students. The project is known as Access to Success.

    Continued in article

    Jensen Comment
    Those in my generation might appreciate the fact that this car has a "NASH" dashboard. The problem is that when a car's dashboard signals trouble such as oil leaks and overheating, owners can easily trade in or junk the clunker. This is not so simple in the politics of state universities.


    May 2, 2008 message from Carolyn Kotlas [kotlas@email.unc.edu]

    REPORT ON E-LEARNING RETURNS ON INVESTMENT

    "Within the academic community there remains a sizable proportion of sceptics who question the value of some of the tools and approaches and perhaps an even greater proportion who are unaware of the full range of technological enhancements in current use. Amongst senior managers there is a concern that it is often difficult to quantify the returns achieved on the investment in such technologies. . . . JISC infoNet, the Association for Learning Technology (ALT) and The Higher Education Academy were presented with the challenge of trying to make some kind of sense of the diversity of current e-learning practice across the sector and to seek out evidence that technology-enhanced learning is delivering tangible benefits for learners, teachers and institutions."

    The summary of the project is presented in the recently-published report, "Exploring Tangible Benefits of e-Learning: Does Investment Yield Interest?" Some benefits were hard to measure and quantify, and the case studies were limited to only sixteen institutions. However, according to the study, there appears to be "clear evidence" of many good returns on investment in e-learning. These include improved student pass rates, improved student retention, and benefits for learners with special needs.

    A copy of the report is available at

    http://www.jiscinfonet.ac.uk/publications/camel-tangible-benefits.pdf

    A two-page briefing paper is available at http://www.jisc.ac.uk/media/documents/publications/bptangiblebenefitsv1.pdf

    JISC infoNet, a service of the Joint Information Systems Committee, "aims to be the UK's leading advisory service for managers in the post-compulsory education sector promoting the effective strategic planning, implementation and management of information and learning technology." For more information, go to http://www.jiscinfonet.ac.uk/

    Association for Learning Technology (ALT), formed in 1993, is "the leading UK body bringing together practitioners, researchers, and policy makers in learning technology." For more information, go to http://www.alt.ac.uk/

    The mission of The Higher Education Academy, owned by two UK higher education organizations (Universities UK and GuildHE), is to "help institutions, discipline groups, and all staff to provide the best possible learning experience for their students." For more information, go to http://www.heacademy.ac.uk/

    Bob Jensen's threads on asynchronous learning are at http://faculty.trinity.edu/rjensen/255wp.htm
    Also see http://faculty.trinity.edu/rjensen/265wp.htm

    Assessment Issues --- http://faculty.trinity.edu/rjensen/assess.htm

    Threads on Costs and Instructor Compensation (somewhat outdated) --- http://faculty.trinity.edu/rjensen/distcost.htm

    Bob Jensen's education technology threads are linked at http://faculty.trinity.edu/rjensen/000aaa/0000start.htm

     


    Question
    Guess which parents most strongly object to grade inflation?

    Hint: Parents Say Schools Game System, Let Kids Graduate Without Skills

    The Bredemeyers represent a new voice in special education: parents disappointed not because their children are failing, but because they're passing without learning. These families complain that schools give their children an easy academic ride through regular-education classes, undermining a new era of higher expectations for the 14% of U.S. students who are in special education. Years ago, schools assumed that students with disabilities would lag behind their non-disabled peers. They often were taught in separate buildings and left out of standardized testing. But a combination of two federal laws, adopted a quarter-century apart, has made it national policy to hold almost all children with disabilities to the same academic standards as other students.
    John Hechinger and Daniel Golden, "Extra Help:  When Special Education Goes Too Easy on Students," The Wall Street Journal, August 21, 2007, Page A1 ---  http://online.wsj.com/article/SB118763976794303235.html?mod=todays_us_page_one

    Bob Jensen's threads on grade inflation are at http://faculty.trinity.edu/rjensen/Assess.htm#GradeInflation

    Bob Jensen's fraud updates are at http://faculty.trinity.edu/rjensen/FraudUpdates.htm


    Question
    What Internet sites help you compare neighboring K-12 schools?

    "Grading Neighborhood Schools: Web Sites Compare A Variety of Data, Looking Beyond Scores," by Katherine Boehret, The Wall Street Journal, February 20, 2008; Page D6 ---

    I performed various school queries using Education.com Inc., GreatSchools Inc.'s GreatSchools.net and SchoolMatters.com by typing in a ZIP Code, city, district or school name. Overall, GreatSchools and Education.com offered the most content-packed environments, loading their sites with related articles and offering community feedback on education-related issues by way of blog posts or surveys. And though GreatSchools is 10 years older than Education.com, which made its debut in June, the latter has a broader variety of content and considers its SchoolFinder feature -- newly available as of today -- just a small part of the site.

    Both Education.com and GreatSchools.net base a good portion of their data on information gathered by the Department of Education and the National Center for Education Statistics, the government entity that collects and analyzes data related to education.

    SchoolMatters.com, a service of Standard & Poor's, is more bare-bones, containing quick statistical comparisons of schools. (S&P is a unit of McGraw-Hill Cos.) This site gets its content from various sources, including state departments of education, private research firms, the Census and National Public Education Finance Survey. This is evidenced by lists, charts and pie graphs that would make Ross Perot proud. I learned about where my alma mater high school got its district revenue in 2005: 83% was local, 15% was state and 2% was federal. But I couldn't find district financial information for more recent years on the site.

    All three sites base at least some school-evaluation results on test scores, a point that some of their users critique. Parents and teachers, alike, point out that testing doesn't always paint an accurate picture of a school and can be skewed by various unacknowledged factors, such as the number of students with disabilities.

    Education.com's SchoolFinder feature is starting with roughly 47,000 schools in 10 states: California, Texas, New York, Florida, Illinois, Pennsylvania, Ohio, Michigan, New Jersey and Georgia. In about two months, the site hopes to have data for all states, totaling about 60,000 public and charter schools. I was granted early access to SchoolFinder, but only Michigan was totally finished during my testing.

    SchoolFinder lets you narrow your results by type (public or charter), student-to-teacher ratio, school size or Adequate Yearly Progress (AYP), a measurement used to determine each school's annual progress. Search results showed specific details on teachers that I didn't see on the other sites, such as how many teachers were fully credentialed in a particular school and the average years of experience held by a school's teachers.

    The rest of the Education.com site contains over 4,000 articles written by well-known education sources like the New York University Child Study Center, Reading is Fundamental and the Autism Society of America. It also contains a Web magazine and a rather involved discussion-board community where members can ask questions of like-minded parents and the site's experts, who respond with advice and suggestions of articles that might be helpful.

    Private schools aren't required to release test scores, student or teacher statistics, so none of the sites had as much data on private schools. However, GreatSchools.net at least offered basic results for most private-school queries that I performed, such as a search for Salesianum School in Delaware (where a friend of mine attended) that returned the school's address, a list of the Advanced Placement exams it offered from 2006 to 2007 and six rave reviews from parents and former students.

    GreatSchools.net makes it easy to compare schools, even without knowing specific names. After finding a school, I was able to easily compare that school with others in the geographic area or school district -- using a chart with numerous results on one screen. After entering my email address, I saved schools to My School List for later reference.

    I couldn't find each school's AYP listed on GreatSchools.net, though these data were on Education.com and SchoolMatters.com.

    SchoolMatters.com doesn't provide articles, online magazines or community forums. Instead, it spits out data -- and lots of it. A search for "Philadelphia" returned 324 schools in a neat comparison chart that could, with one click, be sorted by grade level, reading test scores, math test scores or students per teacher. (The Julia R. Masterman Secondary School had the best reading and math test scores in Philadelphia, according to the site.)

    SchoolMatters.com didn't have nearly as much user feedback as Education.com or GreatSchools.net. But stats like a school's student demographics, household income distribution and the district's population age distribution were accessible thanks to colorful pie charts.

    These three sites provide a good overall idea of what certain schools can offer, though GreatSchools.net seems to have the richest content in its school comparison section. Education.com excels as a general education site and will be a comfort to parents in search of reliable advice. Its newly added SchoolFinder, while it's in early stages now, will only improve this resource for parents and students.


    May 2, 2007 message from Carnegie President [carnegiepresident@carnegiefoundation.org]

    A different way to think about ... accountability Alex McCormick's timely essay brings to our attention one of the most intriguing paradoxes associated with high-stakes measurement of educational outcomes. The more importance we place on going public with the results of an assessment, the higher the likelihood that the assessment itself will become corrupted, undermined and ultimately of limited value. Some policy scholars refer to the phenomenon as a variant of "Campbell's Law," named for the late Donald Campbell, an esteemed social psychologist and methodologist. Campbell stated his principle in 1976: "The more any quantitative social indicator is used for social decisionmaking, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor."

    In the specific case of the Spellings Commission report, Alex points out that the Secretary's insistence that information be made public on the qualities of higher education institutions will place ever higher stakes on the underlying measurements, and that very visibility will attenuate their effectiveness as accountability indices. How are we to balance the public's right to know with an institution's need for the most reliable and valid information? Alex McCormick's analysis offers us another way to think about the issue.

    Carnegie has created a forum—Carnegie Conversations—where you can engage publicly with the author and read and respond to what others have to say about this article at http://www.carnegiefoundation.org/perspectives/april2007 .

    Or you may respond to Alex privately through carnegiepresident@carnegiefoundation.org .

    We look forward to hearing from you.

    Sincerely,

    Lee S. Shulman
    President The Carnegie Foundation for the Advancement of Teaching

    Jensen Comment
    The fact that an assessment provides incentives to cheat is not a reason not to assess. The fact that we assign grades to students gives them incentives to cheat; that does not justify ceasing to assess, because in many instances the assessment process is the major incentive for a student to work harder and learn more. The fact that business firms must be audited and produce financial statements provides incentives to cheat; that does not justify not holding business firms accountable. Alex McCormick's analysis and Shulman's concurrence are a bit one-sided in opposing the Spellings Commission recommendations.

    Also see Full Disclosure to Consumers of Higher Education at http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#FullDisclosure


    School Assessment and College Admission Testing

    July 25, 2006 query from Carol Flowers [cflowers@OCC.CCCD.EDU]

    I am looking for a study that I saw. I was unsure if someone in this group had supplied the link originally. It was a very honest and extremely comprehensive evaluation of higher education. In it, the Higher Education Evaluation and Research Group was constantly quoted. But what organizations it is affiliated with, I am unsure.

    They commented on the lack of student academic preparedness in our educational system today, along with other challenging areas that need to be addressed in order to serve the population with which we now deal.

    If anyone remembers such a report, please forward to me the url.

    Thank You!

    July 25, 2006 reply from Bob Jensen

    Hi Carol,

    I think the HEERG is affiliated with the Chancellor's Office of the California Community Colleges. It is primarily focused upon accountability and assessment of these colleges.

    HEERG --- http://snipurl.com/HEERG

    Articles related to your query include the following:

    Leopards in the Temple --- http://www.insidehighered.com/views/2006/06/12/caesar  

    Accountability, Improvement and Money --- http://www.insidehighered.com/views/2005/05/03/lombardi

    Grade Inflation and Abdication --- http://www.insidehighered.com/views/2005/06/03/lombardi

    Students Read Less. Should We Care? --- http://www.insidehighered.com/views/2005/08/23/lombardi

    Missing the Mark: Graduation Rates and University Performance --- http://www.insidehighered.com/views/2005/02/14/lombardi2


    Assessment of Learning Achievements of College Graduates

    "Getting the Faculty On Board," by Freeman A. Hrabowski III, Inside Higher Ed, June 23, 2006 --- http://www.insidehighered.com/views/2006/06/23/hrabowski

    But as assessment becomes a national imperative, college and university leaders face a major challenge: Many of our faculty colleagues are skeptical about the value of external mandates to measure teaching and learning, especially when those outside the academy propose to define the measures. Many faculty members do not accept the need for accountability, but the assessment movement’s success will depend upon faculty because they are responsible for curriculum, instruction and research. All of us — policy makers, administrators and faculty — must work together to develop language, strategies and practices that help us appreciate one another and understand the compelling need for assessment — and why it is in the best interest of faculty and students.

    Why is assessment important? We know from the work of researchers like Richard Hersh, Roger Benjamin, Mark Chun and George Kuh that college enrollment will be increasing by more than 15 percent nationally over the next 15 years (and in some states by as much as 50 percent). We also know that student retention rates are low, especially among students of color and low-income students. Moreover, of every 10 children who start 9th grade, only seven finish high school, five start college, and fewer than three complete postsecondary degrees. And there is a 20 percent gap in graduation rates between African Americans (42 percent) and whites (62 percent). These numbers are of particular concern given the rising higher education costs, the nation’s shifting demographics, and the need to educate more citizens from all groups.

    At present, we do not collect data on student learning in a systematic fashion and rankings on colleges and universities focus on input measures, rather than on student learning in the college setting. Many people who have thought about this issue agree: We need to focus on “value added” assessment as an approach to determine the extent to which a university education helps students develop knowledge and skills. This approach entails comparing what students know at the beginning of their education and what they know upon graduating. Such assessment is especially useful when large numbers of students are not doing well — it can and should send a signal to faculty about the need to look carefully at the “big picture” involving coursework, teaching, and the level of support provided to students and faculty.

    Many in the academy, however, continue to resist systematic and mandated assessment in large part because of problems they see with K-12 initiatives like No Child Left Behind — e.g., testing that focuses only on what can be conveniently measured, unacceptable coaching by teachers, and limiting what is taught to what is tested. Many academics believe that what is most valuable in the college experience cannot be measured during the college years because some of the most important effects of a college education only become clearer some time after graduation. Nevertheless, more institutions are beginning to understand that value-added assessment can be useful in strengthening teaching and learning, and even student retention and graduation rates.

    It is encouraging that a number of institutions are interested in implementing value-added assessment as an approach to evaluate student progress over time and to see how they compare with other institutions. Such strategies are more effective when faculty and staff across the institution are involved. Examples of some best practices include the following:

    1. Constantly talking with colleagues about both the challenges and successful initiatives involving undergraduate education.
    2. Replicating successful initiatives (best practices from within and beyond the campus), in order to benefit as many students as possible.
    3. Working continuously to improve learning based on what is measured — from advising practices and curricular issues to teaching strategies — and making changes based on what we learn from those assessments.
    4. Creating accountability by ensuring that individuals and groups take responsibility for different aspects of student success.
    5. Recruiting and rewarding faculty who are committed to successful student learning (including examining the institutional reward structure).
    6. Taking the long view by focusing on initiatives over extended periods of time — in order to integrate best practices into the campus culture.

    We in the academy need to think broadly about assessment. Most important, are we preparing our students to succeed in a world that will be dramatically different from the one we live in today? Will they be able to think critically about the issues they will face, working with people from all over the globe? It is understandable that others, particularly outside the university, are asking how we demonstrate that our students are prepared to handle these issues.

    Assessment is becoming a national imperative, and it requires us to listen to external groups and address the issues they are raising. At the same time, we need to encourage and facilitate discussions among our faculty — those most responsible for curriculum, instruction, and research — to grapple with the questions of assessment and accountability. We must work together to minimize the growing tension among groups — both outside and inside the university — so that we appreciate and understand different points of view and the compelling need for assessment.

    Bob Jensen's threads on controversies in higher education are at http://faculty.trinity.edu/rjensen/HigherEdControversies.htm

    NCLB = No Child Left Behind Law
    A September 2007 Thomas B. Fordham Institute report found NCLB's assessment system "slipshod" and characterized by "standards that are discrepant state to state, subject to subject, and grade to grade." For example, third graders scoring at the sixth percentile on Colorado's state reading test are rated proficient. In South Carolina the third grade proficiency cut-off is the sixtieth percentile.
    Peter Berger, "Some Will Be Left Behind," The Irascible Professor, November 10, 2007 --- http://irascibleprofessor.com/comments-11-10-07.htm


    "This is Only a Test," by Peter Berger, The Irascible Professor, December 5, 2005 --- http://irascibleprofessor.com/comments-12-05-05.htm

    Back in 2002 President Bush predicted "great progress" once schools began administering the annual testing regime mandated by No Child Left Behind. Secretary of Education Rod Paige echoed the President's sentiments. According to Mr. Paige, anyone who opposed NCLB testing was guilty of "dismissing certain children" as "unteachable."

    Unfortunately for Mr. Paige, that same week The New York Times documented "recent" scoring errors that had "affected millions of students" in "at least twenty states." The Times report offered a pretty good alternate reason for opposing NCLB testing. Actually, it offered several million pretty good alternate reasons.

    Here are a few more.

    There's nothing wrong with assessing what students have learned. It lets parents, colleges, and employers know how our kids are doing, and it lets teachers know which areas need more teaching. That's why I give quizzes and tests and one of the reasons my students write essays.

    Of course, everybody who's been to school knows that some teachers are tougher graders than others. Traditional standardized testing, from the Iowa achievement battery to the SATs, was supposed to help us gauge the value of one teacher's A compared to another's. It provided a tool with which we could compare students from different schools.

    This works fine as long as we recognize that all tests have limitations. For example, for years my students took a nationwide standardized social studies test that required them to identify the President who gave us the New Deal. The problem was the seventh graders who took the test hadn't studied U.S. history since the fifth grade, and FDR usually isn't the focus of American history classes for ten-year-olds. He also doesn't get mentioned in my eighth grade U.S. history class until May, about a month after eighth graders took the test.

    In other words, wrong answers about the New Deal only meant we hadn't gotten there yet. That's not how it showed up in our testing profile, though. When there aren't a lot of questions, getting one wrong can make a surprisingly big difference in the statistical soup.

    Multiply our FDR glitch by the thousands of curricula assessed by nationwide testing. Then try pinpointing which schools are succeeding and failing based on the scores those tests produce. That's what No Child Left Behind pretends to do.

    Testing fans will tell you that cutting edge assessments have eliminated inconsistencies like my New Deal hiccup by "aligning" the tests with new state of the art learning objectives and grade level expectations. The trouble is these newly minted goals are often hopelessly vague, arbitrarily narrow, or so unrealistic that they're pretty meaningless. That's when they're not obvious and the same as they always were.

    New objectives also don't solve the timing problem. For example, I don't teach poetry to my seventh grade English students. That's because I know that their eighth grade English teacher does an especially good job with it the following year, which means that by the time they leave our school, they've learned about poetry. After all, does it matter whether they learn to interpret metaphors when they're thirteen or they're fourteen as long as they learn it?

    Should we change our program, which matches our staff's expertise, just to suit the test's arbitrary timing? If we don't, our seventh graders might not make NCLB "adequate yearly progress." If we do, our students likely won't learn as much.

    Which should matter more?

    Even if we could perfectly match curricula and test questions, modern assessments would still have problems. That's because most are scored according to guidelines called rubrics. Rubric scoring requires hastily trained scorers, who typically aren't teachers or even college graduates, to determine whether a student's essay "rambles" or "meanders." Believe it or not, that choice represents a twenty-five percent variation in the score. Or how about distinguishing between "appropriate sentence patterns" and "effective sentence structure," or language that's "precise and engaging" versus "fluent and original."

    These are the flip-a-coin judgments at the heart of most modern assessments. Remember that the next time you read about which schools passed and which ones failed.

    Unreliable scoring is one reason the Government Accountability Office condemned data "comparisons between states" as "meaningless." It's why CTB/McGraw-Hill had to recall and rescore 120,000 Connecticut writing tests after the scores were released. It's why New York officials discarded the scores from its 2003 Regents math exam. A 2001 Brookings Institution study found that "fifty to eighty percent of the improvement in a school's average test scores from one year to the next was temporary" and "had nothing to do with long-term changes in learning or productivity." A senior RAND analyst warned that today's tests aren't identifying "good schools" and "bad schools." Instead, "we're picking out lucky and unlucky schools."

    Students aren't the only victims of faulty scoring. Last year the Educational Testing Service conceded that more than ten percent of the candidates taking its 2003-2004 nationwide Praxis teacher licensing exam incorrectly received failing scores, which resulted in many of them not getting jobs. ETS attributed the errors to the "variability of human grading."

    The New England Common Assessment Program, administered for NCLB purposes to all students in Vermont, Rhode Island, and New Hampshire, offers a representative glimpse of the cutting edge. NECAP is heir to all the standard problems with standardized test design, rubrics, and dubiously qualified scorers.

    NECAP security is tight. Tests are locked up, all scrap paper is returned to headquarters for shredding, and testing scripts and procedures are painstakingly uniform. Except on the mathematics exam, each school gets to choose if its students can use calculators.

    Whether or not you approve of calculators on math tests, how can you talk with a straight face about a "standardized" math assessment if some students get to use them and others don't? Still more ridiculous, there's no box to check to show whether you used one or not, so the scoring results don't even differentiate between students and schools that did and didn't.

    Finally, guess how NECAP officials are figuring out students' scores. They're asking classroom teachers. Five weeks into the year, before we've even handed out a report card to kids we've just met, we're supposed to determine each student's "level of proficiency" on a twelve point scale. Our ratings, which rest on distinguishing with allegedly statistical accuracy between "extensive gaps," "gaps," and "minor gaps," are a "critical piece" and "key part of the NECAP standard setting process."

    Let's review. Because classroom teachers' grading standards aren't consistent enough from one school to the next, we need a standardized testing program. To score the standardized testing program, every teacher has to estimate within eight percentage points how much their students know so test officials can figure out what their scores are worth and who passed and who failed.

    If that makes sense to you, you've got a promising future in education assessment. Unfortunately, our schools and students don't.


    "College Board Asks Group Not to Post Test Analysis," by Diana Jean Schemol, The New York Times, December 4, 2004 --- http://www.nytimes.com/2004/12/04/education/04college.html?oref=login 

    The College Board, which owns the SAT college entrance exam, is demanding that a nonprofit group critical of standardized tests remove from its Web site data that breaks down scores by race, income and sex.

    The demand, in a letter to The National Center for Fair and Open Testing, also known as FairTest, accuses the group of infringing on the College Board's copyright.

    "Unfortunately, your misuse overtly bypasses our ownership and significantly impacts the perceptions of students, parents and educators regarding the services we provide," the letter said.

    The move by the College Board comes amid growing criticism of the exams, with more and more colleges and universities raising questions about their usefulness as a gauge of future performance and discarding them as requirements for admission. The College Board is overhauling parts of the exam and will be using a new version beginning in March.

    FairTest has led opposition to the exams, and releases the results to support its accusation of bias in the tests, a claim rejected by test makers, who contend the scores reflect true disparities in student achievement. FairTest posts the information in easily accessible charts, and Robert A. Schaeffer, its spokesman, said they were the Web site's most popular features.

    In its response to the College Board letter, which FairTest posted on its Web site on Tuesday, the group said it would neither take down the data nor seek formal permission to use it. FairTest has been publicly showing the data for nearly 20 years, Mr. Schaeffer said, until now without objection from the testing company, which itself releases the data in annual reports it posts on its Web site.

    "You can't copyright numbers like that," Mr. Schaeffer said. "It's all about public education and making the public aware of score gaps and the potential for bias in the exams."

    Devereux Chatillon, a specialist on copyright law at Sonnenschein, Nath & Rosenthal in New York, said case law supported FairTest's position. "Facts are not copyrightable," Ms. Chatillon said. In addition, she said, while the College Board may own the exam, the real authors of the test results are those taking the exams.

    Continued in article

    2004 Senior Test Scores:  ACT --- http://www.fairtest.org/nattest/ACT%20Scores%202004%20Chart.pdf 

    2004 Senior Test Scores:  SAT --- http://www.fairtest.org/nattest/SAT%20Scoresn%202004%20Chart.pdf 

    Fair Test Reacts to the SAT Outcomes --- http://www.fairtest.org/univ/2004%20SAT%20Score%20Release.html 

    Fair Test Home --- http://www.fairtest.org/ 

    Jensen Comment:
    If there is to be a test that sets apart students that demonstrate higher ability, motivation, and aptitude for college studies, how would it differ from the present Princeton tests that have been designed and re-designed over and over again?  I cannot find any Fair Test models of what such a test would look like.  One would assume that by its very name Fair Test still agrees that some test is necessary.   However, the group's position seems to be that no national test is feasible that will give the same means and standard deviations for all groups (males, females, and race categories).  Fair Test advocates "assessments based on students' actual performances, not one-shot, high-stakes exams."  

    Texas has such a Fair Test system in place for admission to any state university.  The President of the University of Texas, however, wants the system to be modified since his top-rated institution is losing all of its admission discretion and may soon be overwhelmed with more admissions than can be seated in classrooms.  My module on this issue, which was a special feature on 60 Minutes from CBS, is at http://faculty.trinity.edu/rjensen/book04q4.htm#60Minutes 

    The problem with performance-based systems (such as the requirement that any state university in Texas must accept any graduate in the top 10% of the graduating class from any Texas high school) is that high schools in the U.S. generally follow the same grading scale as Harvard University.  Most classes give over half the students A grades.  Some teachers give A grades just for attendance or effort apart from performance.  This means that when it comes to isolating the top 10% of each graduating class, we're talking in terms of Epsilon differences.  I hardly think Epsilon is a fair criterion for admission to college.  Also, as was pointed out on 60 Minutes, students with 3.9 grade averages from some high schools tend to score much lower than students with 3.0 grade averages from other high schools.  This might achieve better racial mix but hardly seems fair to the 3.0 student who was unfortunate enough to live near a high school having a higher proportion of top students.   That was the theme of the 60 Minutes CBS special contrasting a 3.9 low SAT student who got into UT versus a 3.0 student who had a high SAT but was denied admission to UT.

    What we really need is to put more resources into fair chances for those who test poorly or happen to fall Epsilon below that hallowed 10% cutoff in a performance-based system. This may entail more time and remedial effort on the part of students before or after entering college.


    Mount Holyoke Dumps the SAT
    Mount Holyoke College, which decided in 2001 to make the SAT optional, is finding very little difference in academic performance between students who provided their test scores and those who didn't.  The women's liberal arts college is in the midst of one of the most extensive studies to date about the impact of dropping the SAT -- a research project financed with $290,000 from the Mellon Foundation.  While the study isn't complete, the college is releasing some preliminary results. So far, Mount Holyoke has found that there is a difference of 0.1 point in the grade-point average of those who do and do not submit SAT scores. That is equivalent to approximately one letter grade in one course over a year of study.  Those results are encouraging to Mount Holyoke officials about their decision in 2001.
    Scott Jaschik, "Not Missing the SAT," Inside Higher Ed March 9, 2005 --- http://www.insidehighered.com/insider/not_missing_the_sat 
    Jensen Comment:
    These results differ from the experiences of the University of Texas system where grades and test scores differ greatly between secondary schools.   Perhaps Mount Holyoke is not getting applications from students in the poorer school districts.  See http://faculty.trinity.edu/rjensen/book04q4.htm#60Minutes 
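    The "one letter grade in one course over a year" equivalence reported above is simple arithmetic worth making explicit: a GPA is grade points averaged over courses, so a 0.1 GPA gap times the number of courses taken in a year gives the total grade points involved. A back-of-the-envelope sketch follows; the 8-to-10-courses-per-year load is my assumption for illustration, not a figure from the Mellon-funded study.

    ```python
    # Sanity check of the Mount Holyoke claim: a 0.1 GPA difference is
    # roughly one letter-grade step (1.0 grade point) in a single course
    # over a year of study. Annual course load is an assumed parameter.

    def gpa_gap_as_course_points(gpa_gap, courses_per_year):
        """Total grade points the GPA gap represents across a year's courses."""
        return gpa_gap * courses_per_year

    # With a typical 8-10 course year, a 0.1 GPA gap concentrates to about
    # 0.8-1.0 grade points in one course, i.e. approximately one letter grade.
    for load in (8, 10):
        print(load, "courses:", gpa_gap_as_course_points(0.1, load), "grade points")
    ```

    In other words, the 0.1-point gap between SAT submitters and non-submitters amounts to one student in effect earning a B instead of an A in a single course per year, which is why Mount Holyoke officials read it as negligible.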


    Dangers of Self Assessment

    My undergraduate students can’t accurately predict their academic performance or skill levels. Earlier in the semester, a writing assignment on study styles revealed that 14 percent of my undergraduate English composition students considered themselves “overachievers.” Not one of those students was receiving an A in my course by midterm. Fifty percent were receiving a C, another third was receiving B’s and the remainder had earned failing grades by midterm. One student wrote, “overachievers like myself began a long time ago.” She received a 70 percent on her first paper and a low C at midterm.
    Shari Wilson, "Ignorant of Their Ignorance," Inside Higher Ed, November 16, 2006 --- http://www.insidehighered.com/views/2006/11/16/wilson
    Jensen comment
    This does not bode well for self-assessment.

    Do middle-school students understand how well they actually learn?
    Given national mandates to ‘leave no child behind,’ grade-school students are expected to learn an enormous amount of course material in a limited amount of time. “Students have too much to learn, so it’s important they learn efficiently,” says Dr. John Dunlosky, Kent State professor of psychology and associate editor of Journal of Experimental Psychology: Learning, Memory and Cognition. Today, students are expected to understand and remember difficult concepts relevant to state achievement tests. However, a major challenge is the student’s ability to judge his own learning. “Students are extremely over confident about what they’re learning,” says Dunlosky. Dunlosky and his colleague, Dr. Katherine Rawson, Kent State assistant professor of psychology, study metacomprehension, or the ability to judge your own comprehension and learning of text materials. Funded by the U.S. Department of Education, their research primarily focuses on fifth, seventh and eighth graders as well as college-aged students, and how improving metacomprehension can, in turn, improve students’ self-regulated learning.
    PhysOrg, November 26, 2007 --- http://physorg.com/news115318315.html


    Competency-Based Assessment


    Question
    What are two early adopters of competency-based education in distance education courses?

    Undergraduate Program Answer:  Western Governors University (WGU)
    Graduate Program Answer:  Chartered Accountancy School of Business (CASB) in Western Canada
    See http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge

    Question
    How do the competency-based programs of the University of Chicago (in the early 1900s) and, in the 21st Century, the University of Wisconsin, the University of Akron, and Southern New Hampshire University differ from the WGU and CASB programs?

    Answer
    WGU and CASB administer competency-based testing only for students enrolled in distance education courses.
    The other universities mentioned provide(d) transcript credits without requiring enrollment in courses.

    "Competency-Based Education Goes Mainstream in Wisconsin," by Scott Carlson, Chronicle of Higher Education, September 30, 2013 ---
    http://chronicle.com/article/Competency-Based-Education/141871/?cid=wc&utm_source=wc&utm_medium=en

    Twenty years ago, Aaron Apel headed off to the University of Wisconsin at Platteville, where he spent too little time studying and too much time goofing off. He left the university, eventually earning an associate degree in information technology at a community college.

    Now, as a longtime staff member in the registrar's office at Wisconsin's Madison campus, he has advanced as far as his education will let him. "I have aspirations to climb the ladder in administration, but the opportunity isn't there without a four-year degree," he says.

    Spending months in a classroom is out of the question: In addition to his full-time job, he helps his wife run an accounting business, shuttles three kids to activities, and oversees an amateur volleyball league. Now he may have another option. Later this year Wisconsin's extension system will start a competency-based learning program, called the Flexible Option, in which students with professional experience and training in certain skills might be able to test out of whole courses on their way to getting a degree.

    Competency-based learning is already famously used by private institutions like Southern New Hampshire University and Western Governors University, but Wisconsin will be one of the first major public universities to take on this new, controversial form of granting degrees. Among the system's campuses, Milwaukee was first to announce bachelor's degrees in nursing, diagnostic imaging, and information science and technology, along with a certificate in professional and business communication. UW Colleges, made up of the system's two-year institutions, is developing liberal-arts-oriented associate degrees. The Flex Option, as it's often called, may cost the Wisconsin system $35-million over the next few years, with half of that recovered through tuition. The system is starting with a three-month, all-you-can-learn term for $2,250.

    If done right, the Flex Option could help a significant number of adults acquire marketable skills and cross the college finish line—an important goal in Wisconsin, which lags behind neighboring states in percentage of adults with college diplomas. There are some 800,000 people in the state who have some college credits but no degree—among them Wisconsin Gov. Scott Walker, who dropped out of Marquette University. He had pushed the university system to set up the Flex Option early last year, when he was considering inviting Western Governors to the state to close a statewide skills gap in high-demand fields like health care, information technology, and advanced manufacturing.

    "Students in general are learning in very different ways," the governor, a Republican, says in an interview. The state's population of adults with some college but no degree constitutes "a target-rich environment for us to find the new engineers, health-care professionals, and IT experts that we need to fill these jobs, so we don't have to recruit them from elsewhere and we don't have to wait for years for undergraduates."

    But if it's designed poorly, the program will confirm perceptions held by some faculty members, who already thought that the governor's policies were hostile to higher education. They worry that the Flex Option will turn the University of Wisconsin into a kind of diploma mill or suck resources from a system that is already financially pressured. Faculty at the Green Bay campus passed a resolution to express "doubts that the Flexible degree program will meet the academic standards of a university education."

    "It's an intriguing idea, but I think the questions that need to be asked are what are the serious limitations of it," says Eric Kraemer, a philosophy professor at the La Crosse campus, where faculty members were also highly skeptical of the Flex Option. Mr. Kraemer wonders whether there actually is a significant group of Wisconsin adults who have the initiative and ability to test out of big portions of degree programs. And, particularly in a squishier subject area like the humanities, he wonders whether testing can adequately evaluate what a traditional student would glean through time and effort spent in a course. "I have serious doubts about the effectiveness of simply doing a competency test to determine whether someone can actually think on their feet."

    Certainly, there are a lot of details to be worked out, even as the Flexible Option prepares to enroll its first students. Some of the challenges are technical or logistical: Wisconsin's extension program will have to spend millions to create a student-information system flexible enough to work in a new environment, where student progress is tracked not by course time but competencies, and where instruction and assessment are decoupled.

    Continued in article
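A back-of-envelope calculation puts the quoted pricing in perspective. Assuming a student strings together four consecutive $2,250 three-month terms (my assumption; the article prices only a single term) and using the roughly $6,900 average annual resident tuition at Wisconsin's four-year campuses reported elsewhere on this page, the Flex Option only beats conventional tuition for students who progress faster than the conventional pace:

```python
# Back-of-envelope annualized cost of Wisconsin's Flex Option versus
# conventional resident tuition. Assumes four consecutive three-month
# terms (my assumption; the article prices only one term).

FLEX_TERM_COST = 2250     # dollars per three-month, all-you-can-learn term
TERMS_PER_YEAR = 4        # assumption: back-to-back terms, no breaks
RESIDENT_TUITION = 6900   # approx. average annual tuition, UW four-year campuses

flex_annual = FLEX_TERM_COST * TERMS_PER_YEAR
print(flex_annual)                     # 9000
print(flex_annual < RESIDENT_TUITION)  # False: a full year of terms costs more
```

A student who needs a full calendar year of terms pays more than conventional tuition, which is consistent with the program's focus on experienced adults who can test out of much of the material quickly.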


    "Innovations in Higher Education? Hah! College leaders need to move beyond talking about transformation before it's too late," by Ann Kirschner, Chronicle of Higher Education, April 8, 2012 ---
    http://chronicle.com/article/Innovations-in-Higher/131424/?sid=wc&utm_source=wc&utm_medium=en

    . . .

    (Conclusion)
    Some of the most interesting work begins in the academy but grows beyond it. "Scale" is not an academic value—but it should be. Most measures of prestige in higher education are based on exclusivity; the more prestigious the college, the larger the percentage of applicants it turns away. Consider the nonprofit Khan Academy, with its library of more than 3,000 education videos and materials, where I finally learned just a little about calculus. In the last 18 months, Khan had 41 million visits in the United States alone. It is using the vast data from that audience to improve its platform and grow still larger. TED, the nonprofit devoted to spreading ideas, just launched TED-Ed, which uses university faculty from around the world to create compelling videos on everything from "How Vast Is the Universe?" to "How Pandemics Spread." Call it Khan Academy for grown-ups. The Stanford University professor Sebastian Thrun's free course in artificial intelligence drew 160,000 students in more than 190 countries. No surprise, the venture capitalists have come a-calling, and they are backing educational startups like Udemy and Udacity.

    All of those are signposts to a future where competency-based credentials may someday compete with a degree.

    At this point, if you are affiliated with an Ivy League institution, you'll be tempted to guffaw, harrumph, and otherwise dismiss the idea that anyone would ever abandon your institution for such ridiculous new pathways to learning. You're probably right. Most institutions are not so lucky. How long will it take for change to affect higher education in major ways? Just my crystal ball, but I would expect that institutions without significant endowments will be forced to change by 2020. By 2025, the places left untouched will be few and far between.

    Here's the saddest fact of all: It is those leading private institutions that should be using their endowments and moral authority to invest in new solutions and to proselytize for experimentation and change, motivated not by survival but by the privilege of securing the future of American higher education.

    The stakes are high. "So let me put colleges and universities on notice," President Obama said in his recent State of the Union address. "If you can't stop tuition from going up, the funding you get from taxpayers will go down." Because of the academy's inability to police itself and improve graduation rates, and because student debt is an expedient political issue, the Obama administration recently threatened to tie colleges' eligibility for campus-based aid programs to institutions' success in improving affordability and value for students.

    Whether the president's threat is fair or not, it will not transform higher education. Change only happens on the ground. Despite all the reasons to be gloomy, however, there is room for optimism. The American university, the place where new ideas are born and lives are transformed, will eventually focus that lens of innovation upon itself. It's just a matter of time.

     

    Jensen Comment
    This is a long and important article for all educators to read carefully. Onsite colleges have always served many purposes, but one purpose they never served is to be knowledge fueling stations where students go to fill their tanks. At best, colleges put a shot glass of fuel into tanks of unknown capacity.

    Students go to an onsite college for many reasons other than to put fuel in their knowledge tanks. They go to live and work in relatively safe transitional environments between home and the mean streets. They go to mature, socialize, mate, drink, laugh, and leap over hurdles that societies place in front of career paths. The problem in the United States is that onsite college living and education have become relatively expensive luxuries. Students must now make more painful decisions about how much to impoverish their parents and how deeply to go into debt.

    I have a granddaughter 22 years old majoring in pharmacy (six year program). She will pay off her student loans before she's 50 years old if she's lucky. Some older students who've not been able to pay off their loans are becoming worried that the Social Security Administration will garnish their retirement Social Security monthly payments for unpaid student loans.

    We've always known that colleges are not necessary places for learning and scholarship. Until 43 years ago (when the Internet was born), private and public libraries were pretty darn necessary for scholarship. Now the Internet provides access to most of the world's known knowledge. But becoming a scholar on the Internet is relatively inefficient and overwhelming without the aid of distillers of knowledge, which is where onsite and online college courses can greatly add to the efficiency of learning.

    But college courses can be terribly disappointing as distillers of knowledge. For one thing, grade inflation disgracefully watered down the amount of real fuel in that shot glass of knowledge provided in a college course ---
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#GradeInflation
    Grades rather than learning became the tickets to careers and graduate schools, thereby leading to street-smart cheating taking the place of real learning perspiration ---
    http://faculty.trinity.edu/rjensen/Plagiarism.htm

    When 80% of Harvard's graduating class graduates cum laude, we can no longer identify which graduates were the best scholars in their class.

    Soon those graduates from Harvard, Florida A&M University, and Capella University, along with those who learned on their own from free courses, video lectures, and course materials on the Web, will all face some sort of common examinations (written and oral) of their competencies in specialties. Competency testing will be the great leveler, much like licensure examinations (the Bar Exam, the CPA Exam, the CFA Exam, etc.) that are graded on the basis of what you know rather than where you learned it. It won't really matter whether you paid a fortune to learn Bessel functions onsite at MIT or learned them for free from the MITx online certificate program ---
    http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI

    If you are an educator or are becoming an educator, please read:
    "Innovations in Higher Education? Hah! College leaders need to move beyond talking about transformation before it's too late," by Ann Kirschner, Chronicle of Higher Education, April 8, 2012 ---
    http://chronicle.com/article/Innovations-in-Higher/131424/?sid=wc&utm_source=wc&utm_medium=en 


    Competency-Based Assessment --- http://faculty.trinity.edu/rjensen/competency.htm

    There are a few really noteworthy competency-based distance education programs, including Western Governors University (WGU) and the Chartered Accountancy School of Business (CASB) in Canada. But these competency-based programs typically have assigned instructors and bear the costs of those instructors. The instructors, however, do not assign grades to students.

    It appears that Southern New Hampshire University (a private institution) is taking competency-based distance education to a new level by eliminating the instructors. It should be noted that SNHU has both an onsite campus and online degree programs.

    "Online Education Is Everywhere. What’s the Next Big Thing?" by Marc Parry, Chronicle of Higher Education, August 31, 2011 ---
    http://chronicle.com/blogs/wiredcampus/online-education-is-everywhere-whats-the-next-big-thing/32898?sid=wc&utm_source=wc&utm_medium=en

    . . .

    The vision is that students could sign up for self-paced online programs with no conventional instructors. They could work at their own speeds through engaging online content that offers built-in assessments, allowing them to determine when they are ready to move on. They could get help through networks of peers who are working on the same courses; online discussions could be monitored by subject experts. When they’re ready, students could complete a proctored assessment, perhaps at a local high school, or perhaps online. The university’s staff could then grade the assessment and assign credit.

    And the education could be far cheaper, because there would be no expensive instructor and students could rely on free, open educational resources rather than expensive textbooks. Costs to the student might include the assessment and the credits.

    “The whole model hinges on excellent assessment, a rock-solid confidence that the student has mastered the student-learning outcomes,” the memo says. “If we know with certainty that they have, we should no longer care if they raced through the course or took 18 months, or if they worked on their courses with the support of a local church organization or community center or on their own. The game-changing idea here is that when we have assessment right, we should not care how a student achieves learning. We can blow up the delivery models and be free to try anything that shows itself to work.”

    Continued in article
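The workflow the memo describes — self-paced study with built-in formative assessments, the student deciding when to attempt a proctored summative assessment, and staff granting credit on a passing score — can be sketched as a simple state machine. Everything below (state names, the mastery threshold, the class itself) is illustrative on my part, not drawn from SNHU's actual system; the point is that credit hinges on the proctored assessment, not on seat time.

```python
# Illustrative state machine for the self-paced, assessment-driven model
# described in the memo. All names and thresholds are hypothetical.

PASSING_SCORE = 0.8  # hypothetical mastery threshold

class CompetencyUnit:
    def __init__(self, name):
        self.name = name
        self.state = "studying"  # self-paced work with open resources

    def take_practice_assessment(self, score):
        """Built-in formative assessment; the student decides when to move on."""
        if self.state == "studying" and score >= PASSING_SCORE:
            self.state = "ready"  # student judges themselves ready

    def take_proctored_assessment(self, score):
        """Summative, proctored assessment graded by university staff."""
        if self.state == "ready":
            self.state = "credited" if score >= PASSING_SCORE else "studying"

unit = CompetencyUnit("Intro Statistics")
unit.take_practice_assessment(0.9)    # studying -> ready
unit.take_proctored_assessment(0.85)  # ready -> credited
print(unit.state)                     # credited
```

Note that nothing in the model records how long the student spent in the "studying" state, which is exactly the memo's point: however the learning happened, only the assessed outcome counts.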

    Jensen Comment
    In its early history, the University of Chicago had competency-based programs where grades were assigned solely on the basis of scores on final examinations. Students did not have to attend class.

    Bob Jensen's threads on competency-based assessment ---
    http://faculty.trinity.edu/rjensen/competency.htm 

    Bob Jensen's threads on distance education alternatives are at
    http://faculty.trinity.edu/rjensen/Crossborder.htm

    Bob Jensen's threads on higher education controversies are at
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm

    I should point out that this is very similar to the AAA's Innovation in Accounting Education Award-winning BAM pedagogy commenced at the University of Virginia (although there were instructors, they did not teach in the traditional sense) ---
    http://faculty.trinity.edu/rjensen/265wp.htm


    "College Degree, No Class Time Required: University of Wisconsin to Offer a Bachelor's to Students Who Take Online Competency Tests About What They Know," by Caroline Porter, The Wall Street Journal, January 24, 2013 ---
    http://online.wsj.com/article/SB10001424127887323301104578255992379228564.html
    Thank you Ramesh Fernando for the heads up.

    David Lando plans to start working toward a diploma from the University of Wisconsin this fall, but he doesn't intend to set foot on campus or even take a single online course offered by the school's well-regarded faculty.

    Instead, he will sit through hours of testing at his home computer in Milwaukee under a new program that promises to award a bachelor's degree based on knowledge—not just class time or credits.

    "I have all kinds of credits all over God's green earth, but I'm using this to finish it all off," said the 41-year-old computer consultant, who has an associate degree in information technology but never finished his bachelor's in psychology.

    Colleges and universities are rushing to offer free online classes known as "massive open online courses," or MOOCs. But so far, no one has figured out a way to stitch these classes together into a bachelor's degree.

    Now, educators in Wisconsin are offering a possible solution by decoupling the learning part of education from student assessment and degree-granting.

    Wisconsin officials tout the UW Flexible Option as the first to offer multiple, competency-based bachelor's degrees from a public university system. Officials encourage students to complete their education independently through online courses, which have grown in popularity through efforts by companies such as Coursera, edX and Udacity.

    No classroom time is required under the Wisconsin program except for clinical or practicum work for certain degrees.

    Elsewhere, some schools offer competency-based credits or associate degrees in areas such as nursing and business, while Northern Arizona University plans a similar program that would offer bachelor's degrees for a flat fee, said spokesman Eric Dieterle. But no other state system is offering competency-based bachelor's degrees on a systemwide basis.

    Wisconsin's Flexible Option program is "quite visionary," said Molly Corbett Broad, president of the American Council on Education, an education policy and lobbying group that represents some 1,800 accredited colleges and universities.

    In Wisconsin, officials say that about 20% of adult residents have some college credits but lack a degree. Given that a growing number of jobs require a degree, the new program appeals to potential students who lack the time or resources to go back to school full time.

    "It is a big new idea in a system like ours, and it is part of the way the ground is shifting under us in higher education," said Kevin Reilly, president of the University of Wisconsin System, which runs the state's 26 public-university campuses.

    Under the Flexible Option, assessment tests and related online courses are being written by faculty who normally teach the related subject-area classes, Mr. Reilly said.

    Officials plan to launch the full program this fall, with bachelor's degrees in subjects including information technology and diagnostic imaging, plus master's and bachelor's degrees for registered nurses. Faculty are working on writing those tests now.

    The charges for the tests and related online courses haven't been set. But university officials said the Flexible Option should be "significantly less expensive" than full-time resident tuition, which averages about $6,900 a year at Wisconsin's four-year campuses.

    The Wisconsin system isn't focusing on the potential cost savings the program may offer it but instead "the university and the state are doing this to strengthen the state work force," said university spokesman David Giroux.

    Siva Vaidhyanathan, a media-studies professor at the University of Virginia who has written about the future of universities, called the program a "worthy experiment" but warned that school officials "need to make sure degree plans are not watered down."

    Some faculty at the school echoed the concern, since the degree will be indistinguishable from those issued by the University of Wisconsin the traditional way. "There has got to be very rigorous documentation that it lives up to the quality of that name," said Mark Cook, an animal-sciences professor and chairman of the university committee for the faculty senate at the Madison campus.

    Wisconsin Gov. Scott Walker has championed the idea, in part because he left college in his senior year for a job opportunity and never finished his degree. He said he hoped to use the Flexible Degree option himself.

    "I think it is one more way to get your degree. I don't see it as replacing things," Mr. Walker said

    Continued in article

    Jensen Comment
    If competency-based learning is to be offered in this manner, I think the pretense that this is equivalent to a traditional undergraduate degree should be dropped. An undergraduate diploma traditionally maps to a curriculum that includes some courses that just cannot be examined with the competency-based testing proposed in this article. This includes speech courses where students must stand in front of audiences to perform and be evaluated. It includes case courses where students are graded on their contributions to oral discussions of a case, discussions that take serendipitous tracks and depend on student interactions. Science laboratories and many other courses entail the use of onsite equipment, chemicals, etc. Some physical education courses entail individual and team performances. Music courses often entail performances on musical instruments or singing before critics. Education courses often entail live teaching and other interactions with K-12 students.

    In between we have online universities that still make students take courses and interact with instructors and other students by email, chat rooms, etc. A few, like Western Governors University, even base course grades on competency-based testing. But WGU only offers certain majors that do not entail laboratories or other onsite experiences. In its early history the University of Chicago allowed students to take final examinations in some courses without attending any classes, but this did not apply to all types of courses available on campus.

    The day will probably come when there are no undergraduate or graduate degrees. Students will instead have transcript records of their graded performances onsite and online. But that day has not yet arrived. The above University of Wisconsin alternative for obtaining an undergraduate diploma must be severely limited relative to the total curriculum available onsite at state university campuses in Wisconsin.

    The above University of Wisconsin alternative for obtaining an online diploma also cuts out important parts of online learning in courses where students frequently interact with instructors and other students enrolled in the class.

    Bob Jensen's threads on the dark side of education technology ---
    http://faculty.trinity.edu/rjensen/000aaa/theworry.htm

    Bob Jensen's threads on assessment are at
    http://faculty.trinity.edu/rjensen/Assess.htm


    Update on the Roaring Online Nonprofit Western Governors University (WGU), founded in 1997 by the governors of 19 states
    A competency-based university where instructors don't assign the grades --- grades are based upon competency testing
    WGU does not admit foreign students
    WGU, a private nonprofit university, now has over 30,000 students from its sponsoring states

    Western Governors University (WGU) --- http://en.wikipedia.org/wiki/WGU

    Competency-Based Learning --- http://faculty.trinity.edu/rjensen/Assess.htm#ConceptKnowledge

    The article below is about WGU-Texas, which was "founded" in 2011 when Texas joined the WGU system
    "Reflections on the First Year of a New-Model University," by Mark David Milliron, Chronicle of Higher Education, October 1, 2012 ---
    http://chronicle.com/article/Reflections-on-the-First-Year/134670/?cid=wc&utm_source=wc&utm_medium=en

    Western Governors University Texas, where I am chancellor, is not an easy institution to describe to your mother—or even your hip sister. It just doesn't fit the profile of most traditional universities, even the newer for-profit and online ones. It brings the work of a national, online, nonprofit university into a state, and it embraces a competency-based education model that is rarely found on an institutionwide level.

    Even for seasoned educators, WGU Texas feels different. And in a year that has seen flat or declining enrollments at many traditional colleges, reports critical of for-profit institutions, and continuing debate over the perils and promise of online learning, our story, and our growth, has been unique. As we hit our one-year anniversary, it's worth taking a few moments to reflect on the ups, downs, challenges, and champions of this newest state model. I'd offer three key reflections on lessons we've learned:

    Building a strong foundation. Western Governors was founded as a private, multistate online university 15 years ago by governors of Western states. Texas is only the third state model within the system, following WGU Indiana and WGU Washington. Before our opening, leaders of Western Governors took time to make sure the idea of this state university made sense for Texas. The intent was to add high-quality, affordable capacity to the state's higher-education system, particularly for adult learners, and to localize it for Texans and their employers.

    This outpost was poised to "go big" in one of the biggest of states, offering more than 50 bachelor's and master's degrees in high-demand fields in business, education, information technology, and health professions. WGU's online-learning model allows students to progress by demonstrating what they know and can do rather than by logging time in class accumulating credit hours.

    In meetings across the state, the idea of WGU Texas gained the support of the state's political, legislative, and higher-education leaders, as well as the Texas Workforce Commission and the Texas Association of Community Colleges. Rushing to roll out was not the goal; entering the education ecosystem with solid support of the model was.

    I came on board as chancellor in December 2011. Having served on WGU's Board of Trustees for six years, I knew the model, and having graduated from and worked for the University of Texas at Austin, I knew Texas.

    In the past six months, we have hired key staff and faculty, formed a state advisory board, opened a main office and training center in downtown Austin, launched our first wave of student outreach, begun working with employers in different metro regions, and started connecting online and on the ground with students. After absorbing WGU's 1,600 existing Texas students, WGU Texas grew by more than 60 percent in this first year, entering August 2012 with more than 3,000 students.

    In about eight weeks, we'll hold our first commencement in Austin, celebrating the graduation of more than 400 students. We're moving quickly now, but it's the firm foundation of outreach, support, and systems that served us well as we took on the next two challenges:

    Confronting conflation. WGU Texas is laser-focused on a student population that is typically underserved. We see ourselves as a good fit for adult learners who need an affordable, quality, and flexible learning model, particularly working students who want to attend full time. We are especially focused on the more than three million Texans who have some college and no credential—students like Jason Franklin, a striving adult learner in a high-demand IT field who had gone as far as he could in his career without a degree. He earned a bachelor's and a master's degree through Western Governors, and is now working on a master's degree from WGU Texas.

    We'd like to help these students reach their goals and get on a solid career and lifelong-learning path.

    However, in offering a new model like ours, you quickly find the conflation problem a challenge. Some assume that you're trying to compete for the fresh-from-high-school graduates who want a campus experience. Others assume that because you're online, you must be a for-profit university. Still others put all online education programs in the same bucket, not distinguishing at all between a traditional model online and a deeply personalized, competency-based learning model.

    Fighting conflation by clearly differentiating and properly positioning our university has been essential. We've had to be clear—and to repeat often—that our approach is designed for adult learners who have some college and work experience. We're absolutely OK with telling prospective students, partner colleges, and state-policy leaders that for 18- to 20-year-olds looking to embark on their first college experience, we are probably not the right fit. In fact, first-time freshmen make up less than 5 percent of our student population.

    The for-profit conflation has been even more interesting. Many people assume that any online university is for-profit. We are not. And even when we assure them that our nonprofit status keeps us deeply committed to low tuition—we have a flat-rate, six-month-term tuition averaging less than $3,000 for full-time students, which our national parent WGU has not raised for four years—they have a hard time getting their minds around it.

    Others are sure we are nothing more than an online version of the traditional model, relying entirely on adjunct faculty. When we explain our history, learning model, and reliance on full-time faculty members who specialize in either mentoring or subject matter, it takes some time. But once people embrace the idea of a personal faculty mentor who takes a student from first contact to crossing the graduation stage, they warm quickly to the model.

    Synching with the state's needs. While forming the foundation and fighting conflation are important, I'd say the key to WGU's state-model successes is the commitment to synching with the economic, educational, and student ecosystem of the state.

    On the economic level, we've been able to work directly with employers eager to support our university, advance our competency-centered model, and hire our graduates. Educationally we have been fortunate to have smart and strategic partners that have guided our entry into the state. For example, our Finish to Go Further transfer program, in partnership with the Texas community-college association, motivates students to complete their associate degrees before transferring. This strategy supports the goal of the Texas Higher Education Coordinating Board of significantly improving postsecondary access and success in Texas.

    Continued in article

    Bob Jensen's threads on assessment (including competency-based assessment) ---
    http://faculty.trinity.edu/rjensen/Assess.htm

    Jensen Comment
WGU is neither a traditional university nor a MOOC. It started as an experiment to deliver a quality education without requiring the 19 founding states to build and/or maintain physical campuses to serve more students. Admittedly, one of the main incentives was to expand learning opportunities without paying the enormous costs of building and maintaining campuses. WGU was mostly an outreach program for non-traditional students who for one reason or another were unable to attend onsite campuses. But the primary goal of WGU was not, and still is not, confined to adult education.

WGU is not intended to take over onsite campus education alternatives. The founders of WGU are well aware that living and learning on a campus brings many important components of education, maturation, and socialization that WGU cannot offer online. For example, young students on campus enter a new phase of life outside the homes and daily oversight of their parents, yet the transition is less abrupt than living on the mean streets of real life. Students meet face-to-face on campus and often end up marrying or living with students they meet there. Campus students can participate in athletics, music performances, theatre performances, dorm life, chapel life, etc.

But WGU is not a MOOC where 100,000 anonymous students may be taking an online course. Instead, WGU courses are relatively small, with intimate 24/7 communications among instructors and students in most of the courses. In many ways the learning communications may be much closer online at WGU than on campus at the University of Texas, where classrooms often hold hundreds of students taking a course.

    There are some types of learning that can take place in live classrooms that are almost impossible online.
For example, an onsite case analysis class (Harvard style) takes on a life of its own that case instructors cannot anticipate before class. Students are forced to speak out in front of other students. A student's unexpected idea may change the direction of the entire case discussion for the remainder of the class. I cannot imagine teaching many Harvard Business School cases online, even though there are ways to draw out innovative ideas and discussions online. Physical presence is part and parcel of teaching many HBS cases.

    Competency-based grading has advantages and disadvantages.
Competency-based grading removes incentives to brown-nose instructors for better grades. It's unforgiving for lazy and unmotivated students. But these advantages can also be disadvantages. Some students become more motivated by hoping that their instructors will reward effort as well as performance. At unexpected points in life those rewards for effort may come at critical times, just before a student is apt to give up and look for a full-time McJob.

Some students are apt to become extremely bored learning about Shakespeare or Mozart. But in attempting to please instructors with added effort, the students may actually discover at some unexpected point something wonderful about Shakespeare or Mozart. Mathematics in particular is one of those subjects that can be a complete turn-off until suddenly a light clicks and a student discovers that math is not only interesting --- math can be easier once you hit a key point in the mathematics learning process. This definitely happened with me, and the light did not shine for me until I started a doctoral program. Quite suddenly I loved mathematics and made it the central component of my five years of full-time doctoral studies at Stanford University.

    Thus WGU and the University of Texas should not be considered competitors. They are different alternatives that have some of the same goals (such as competency in learning content) and some different goals (such as living with other students and participating in extracurricular activities).

I wish WGU well and hope it thrives alongside the traditional state-supported campuses. WGU in some ways was a precursor to MOOC education, but WGU is not a MOOC in the sense that classes are small and can be highly interactive with other students and with instructors. In a MOOC, students have to be more motivated to learn on their own and master the material without much outside help from other students or instructors.

    There are many ways to teach and many ways to learn. WGU found its niche. There's no one-size-fits-all to living and learning.

    Bob Jensen's threads on higher education controversies ---
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm

     


    Western Governors University --- http://en.wikipedia.org/wiki/Western_Governors_University
Instructors do not assign the grades in this successful "competency-based testing" university.

"A President Brings a Revolutionary University to Prominence," by Goldie Blumenstyk, Chronicle of Higher Education, February 26, 2012 ---
    http://chronicle.com/article/A-President-Brings-a/130915/?sid=wc&utm_source=wc&utm_medium=en

    Western Governors University, first conceived in 1995, embodied an idea that was ahead of its time. And early in its life, that showed.

    Traditional accreditors resisted its model: an all-online, competency-based institution. Experts scoffed at its grandiose promises to reshape higher education. Students, unmoved by its founders' ambitious early enrollment projections, mostly stayed away.

    Yet a Utah technology entrepreneur named Robert W. Mendenhall, who had been asked to kick-start the venture a few years into its existence, says he never doubted. "It took me about 30 seconds to decide I would do it," says Mr. Mendenhall, WGU's president since 1999. "I was always confident that we'd pull it off. The idea made so much sense."

    Today the unusual institution has drawn growing notice from national mainstream news media and at meetings on college affordability by both the U.S. Senate and President Obama. It has a growing student body of more than 25,000 students.

    Mr. Mendenhall, now 57, came to WGU when it had no students and no degrees. "The vision of it was just coagulating," recalls Michael O. Leavitt, the former Utah governor who was instrumental in the institution's founding and in Mr. Mendenhall's hiring.

    With his know-how for building start-up businesses, a practical willingness to shed time-consuming and unpromising components (like a plan to run an online catalog of online courses from other institutions), and what Mr. Leavitt calls a determined "sense of mission" for low-cost, competency-based higher education, Mr. Mendenhall kept the nonprofit institution moving.

    Internally, he was an "in your face" presence, a colleague says, while externally, thanks in no small part to the political backing of 19 governors, he pulled the strings that would eventually land WGU millions in federal grants to develop its online programs and its distinguishing proficiency exams by which students progress toward a degree, and millions more from the Lumina Foundation to create what would become its turning point, a teachers' college.

    Continued in article

    Bob Jensen's threads on competency-based assessment ---
    http://faculty.trinity.edu/rjensen/Assess.htm#ComputerBasedAssessment


    Competency-Based College Credit --- http://faculty.trinity.edu/rjensen/Assess.htm#ECA

    "Online Education Is Everywhere. What’s the Next Big Thing?" by Marc Parry, Chronicle of Higher Education, August 31, 2011 ---
    http://chronicle.com/blogs/wiredcampus/online-education-is-everywhere-whats-the-next-big-thing/32898?sid=wc&utm_source=wc&utm_medium=en

    Western Governors University (a nonprofit, competency- based online university) --- http://en.wikipedia.org/wiki/Western_Governors_University
    Also see http://www.wgu.edu/home2

    New Charter University (a for-profit, self-paced, competency-based online university) --- http://en.wikipedia.org/wiki/New_Charter_University

    "No Financial Aid, No Problem. For-Profit University Sets $199-a-Month Tuition for Online Courses," by Marc Parry, Chronicle of Higher Education, March 29, 2012 ---
    http://chronicle.com/article/No-Financial-Aid-No-Problem/131329/?sid=wc&utm_source=wc&utm_medium=en

    It's a higher-education puzzle: Students are flocking to Western Governors University, driving growth of 30 to 40 percent each year. You might expect that competitors would be clamoring to copy the nonprofit online institution's model, which focuses on whether students can show "competencies" rather than on counting how much time they've spent in class.

    So why haven't they?

    Two reasons, says the education entrepreneur Gene Wade. One, financial-aid regulatory problems that arise with self-paced models that aren't based on seat time. And two, opposition to how Western Governors changes the role of professor, chopping it into "course mentors" who help students master material, and graders who evaluate homework but do no teaching.

    Mr. Wade hopes to clear those obstacles with a start-up company, UniversityNow, that borrows ideas from Western Governors while offering fresh twists on the model. One is cost. The for-profit's new venture—New Charter University, led by Sal Monaco, a former Western Governors provost—sidesteps the loan system by setting tuition so cheap that most students shouldn't need to borrow. The price: $796 per semester, or $199 a month, for as many classes as they can finish.

    "This is not buying a house," says Mr. Wade, co-founder and chief executive of UniversityNow. "This is like, do I want to get cable?"

    Another novelty: New Charter offers a try-it-before-you-buy-it platform that mimics the "freemium" model of many consumer Web services. Anyone can create an account and start working through its self-paced online courses free of charge. Their progress gets recorded. If they decide to pay up and enroll, they get access to an adviser (who helps navigate the university) and course specialists (who can discuss the material). They also get to take proctored online tests for course credit.

    The project is the latest in a series of experiments that use technology to rethink the economics of higher education, from the $99-a-month introductory courses of StraighterLine to the huge free courses provided through Stanford and MIT.

    For years, some analysts have argued that ready access to Pell Grants and federal loans actually props up colleges prices, notes Michael B. Horn, executive director for education at Innosight Institute, a think tank focused on innovation. That's because institutions have little incentive to charge anything beneath the floor set by available financial aid.

    "Gene and his team are basically saying, the heck with that—we're going to go around it. We think people can afford it if we offer it at this low a price," Mr. Horn says. "That could be revolutionary."

    Yet the project faces tall hurdles: Will employers value these degrees? Will students sign on? And, with a university that lacks regional accreditation right now­—New Charter is nationally accredited by the Distance Education and Training Council, and is considering seeking regional accreditation—will students be able to transfer its credits?

    Mr. Wade banks on appealing to working adults who crave easier access to education. When asked who he views as the competition, his reply is "the line out the door at community college." In California, where Mr. Wade is based, nearly 140,000 first-time students at two-year institutions couldn't get into any courses at all during the previous academic year, according to a recent Los Angeles Times editorial about the impact of state budget cuts.

    Mr. Wade himself benefited from a first-class education, despite being raised without much money in a housing project in a tough section of Boston. Growing up there, during an era when the city underwent forced busing to integrate its schools, felt like watching a "train wreck" but walking away unscathed. He attended high school at the prestigious Boston Latin School. With assistance from Project REACH, a program to help Boston minorities succeed in higher education, he went to Morehouse College. From there his path included a J.D. from Harvard Law, an M.B.A. from Wharton, and a career as an education entrepreneur.

The 42-year-old founded two earlier companies: LearnNow, a charter-school-management outfit that was sold to Edison Schools, and Platform Learning, a tutoring firm that served low-income students. So far, he's raised about $8 million from investors for UniversityNow, whose New Charter subsidiary is a rebranded, redesigned, and relocated version of an online institution once called Andrew Jackson University.

Breaking a Traditional Mold

    To build the software, Mr. Wade looked beyond the traditional world of educational technology, recruiting developers from companies like Google. Signing up for the university feels more like creating an account with a Web platform like Facebook than the laborious process of starting a traditional program—in fact, New Charter lets you join with your Facebook ID. Students, whether paying or not, start each class by taking an assessment to establish whether they're ready for the course and what material within it they need to work on. Based on that, the system creates a pathway to guide them through the content. They skip stuff that they already know.

    That was part of the appeal for Ruben Fragoso, who signed up for New Charter's M.B.A. program three weeks ago after stumbling on the university while Googling for information about online degrees. Mr. Fragoso, 53, lives in Albuquerque and works full time as a logistics coordinator for a solar power company. The Mexican-born father of two earned a bachelor's degree 12 years ago from Excelsior College. With New Charter, he mostly teaches himself, hunkering down in his home office after dinner to read and take quizzes. By week three, he hadn't interacted with any other students, and his instructor contact had been limited to a welcome e-mail. That was fine by him.

    He likes that he can adjust his schedule to whatever fits—one course at a time if a subject is tough, or maybe three if he prefers. His company's education benefits—up to $5,000 a year—cover the whole thing. With years of business experience, he appreciates the option of heading quickly to a final test on a subject that is familiar to him.

    Continued in article

    US News Rankings --- http://www.usnews.com/rankings

    US News Top Online Education Programs --- http://www.usnews.com/education/online-education
    Do not confuse this with the US News project to evaluate for-profit universities --- a project hampered by refusal of many for-profit universities to provide data

    'Honor Roll' From 'U.S. News' of Online Graduate Programs in Business

Institution | Teaching Practices and Student Engagement | Student Services and Technology | Faculty Credentials and Training | Admissions Selectivity
Arizona State U., W.P. Carey School of Business | 24 | 32 | 37 | 11
Arkansas State U. | 9 | 21 | 1 | 36
Brandman U. (Part of the Chapman U. system) | 40 | 24 | 29 | n/a
Central Michigan U. | 11 | 3 | 56 | 9
Clarkson U. | 4 | 24 | 2 | 23
Florida Institute of Technology | 43 | 16 | 23 | n/a
Gardner-Webb U. | 27 | 1 | 15 | n/a
George Washington U. | 20 | 9 | 7 | n/a
Indiana U. at Bloomington, Kelley School of Business | 29 | 19 | 40 | 3
Marist College | 67 | 23 | 6 | 5
Quinnipiac U. | 6 | 4 | 13 | 16
Temple U., Fox School of Business | 39 | 8 | 17 | 34
U. of Houston-Clear Lake | 8 | 21 | 18 | n/a
U. of Mississippi | 37 | 44 | 20 | n/a

    Source: U.S. News & World Report

    Jensen Comment
I don't know why the largest for-profit universities, which generally provide more online degrees than the above universities combined, are not included in the final outcomes. For example, the University of Phoenix alone has over 600,000 students, most of whom are taking some or all online courses.

My guess is that most for-profit universities are not forthcoming with the data requested by US News analysts. Note that the US News requirement that eligible online programs be regionally accredited does not exclude many for-profit universities. For example, enter such for-profit names as "University of Phoenix" or "Capella University" in the "College Search" box at
    http://colleges.usnews.rankingsandreviews.com/best-colleges/university-of-phoenix-20988
    These universities are included in the set of eligible regionally accredited online degree programs to be evaluated. They just did not do well in the above "Honor Roll" of outcomes for online degree programs.

For-profit universities may have shot themselves in the foot by not providing the evaluation data to US News for online degree program evaluation. But there may be reasons for this. For example, one of the big failings of most for-profit online degree programs is in undergraduate "Admissions Selectivity."

    Bob Jensen's threads on distance education training and education alternatives are at
    http://faculty.trinity.edu/rjensen/Crossborder.htm

    Bob Jensen's threads on ranking controversies are at
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#BusinessSchoolRankings

    Bob Jensen's threads on distance education ---
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#DistanceEducation

    For-Profit Universities Operating in the Gray Zone of Fraud ---
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#ForProfitFraud


    Critical Thinking Badges for Brains That Do Not Have Course Content Competency
    "Online Course Provider, StraighterLine, to Offer Critical-Thinking Tests to Students," by Jeff Selingo, Chronicle of Higher Education, January 19, 2012 --- Click Here
    http://chronicle.com/blogs/wiredcampus/online-course-provider-straighterline-to-offer-critical-thinking-tests-to-students/35092?sid=at&utm_source=at&utm_medium=en

    As alternatives to the college diploma have been bandied about recently, one question always seems to emerge: How do you validate badges or individual classes as a credential in the absence of a degree?

    One company that has been hailed by some as revolutionizing introductory courses might have an answer.

    The company, StraighterLine, announced on Thursday that beginning this fall it will offer students access to three leading critical-thinking tests, allowing them to take their results to employers or colleges to demonstrate their proficiency in certain academic areas.

    The tests—the Collegiate Learning Assessment, sponsored by the Council for Aid to Education, and the Proficiency Profile, from the Educational Testing Service—each measure critical thinking and writing, among other academic areas. The iSkills test, also from ETS, measures the ability of a student to navigate and critically evaluate information from digital technology.

    Until now, the tests were largely used by colleges to measure student learning, but students did not receive their scores. That’s one reason that critics of the tests have questioned their effectiveness since students have little incentive to do well.

    Burck Smith, the founder and chief executive of StraighterLine, which offers online, self-paced introductory courses, said on Thursday that students would not need to take classes with StraighterLine in order to sit for the tests. But he hopes that, for students who do take both classes and tests, the scores on the test will help validate StraighterLine courses.

    StraighterLine doesn’t grant degrees and so can’t be accredited. It depends on accredited institutions to accept its credits, which has not always been an easy task for the company.

    “For students looking to get a leg up in the job market or getting into college,” Mr. Smith said, “this will give them a way to show they’re proficient in key academic areas.”

Jensen Comment

    College diplomas might be obtained in three different scenarios:

    1. Traditional College Courses
      Students take onsite or online courses that are graded by their instructors.
       
    2. Competency-Based College Courses
      Students take onsite or online courses and are then given competency-based examinations.
Examples include the increasingly popular Western Governors University and Canada's Chartered Accountancy School of Business (CASB).
      http://faculty.trinity.edu/rjensen/Assess.htm#ComputerBasedAssessment
       
    3. Competency-Based College Courses That Never Meet or Rarely Meet
      Students might study from course materials and videos in classes that do not meet or rarely meet with instructors.
      In the 1900s the University of Chicago gave degrees to students who took only examinations to pass courses.
      In current times BYU teaches the first two accounting courses from variable speed video disks and then administers competency-based examinations.
The University of New Hampshire is now developing a degree program for students who take only competency-based examinations to pass courses.
      http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#NoInstructors

Recently there has been growth in certificates of online "attendance" in courses that do not constitute college credits toward diplomas. MIT is providing increasingly popular certificates ---
    "Will MITx Disrupt Higher Education?" by Robert Talbert, Chronicle of Higher Education, December 20, 2011 ---
    http://chronicle.com/blognetwork/castingoutnines/2011/12/20/will-mitx-disrupt-higher-education/?sid=wc&utm_source=wc&utm_medium=en

    MITx Open Sharing Wonder
    "MIT Mints a Valuable New Form of Academic Currency," by Kevin Carey, Chronicle of Higher Education, January 22, 2012 ---
    http://chronicle.com/article/MIT-Mints-a-Valuable-New-Form/130410/?sid=wc&utm_source=wc&utm_medium=en
There are no admission requirements or prerequisites to enroll in these online courses. Presumably the only tests of competency might be written or oral examinations given by potential employers. For example, if knowledge of Bessel Functions is required on the job, a potential employer might determine in one way or another whether the student has competency in Bessel Functions ---
     http://en.wikipedia.org/wiki/Bessel_Functions

    In all the above instances, a student's transcript is based upon course content whether or not the student takes courses and/or competency-based examinations in the content of those courses.

StraighterLine's new certificates based upon "Critical-Thinking Tests" are an entirely different concept. Presumably the certificates are no longer rooted in knowledge of content. Rather, these are certificates based upon critical thinking skills in selected basic courses such as a writing skills course.

In my opinion these will be a much harder sell in the market. Whereas a potential employer can assess whether an applicant has the requisite skills in something like Bessel Functions, how does an employer or college admissions officer verify that StraighterLine's "Critical-Thinking Tests" are worth a diddly crap and, if so, what does passing such tests mean in terms of job skills?

Thus far I'm not impressed with Critical Thinking Certificates unless they are also rooted in course content apart from "thinking" alone.

    Bob Jensen's threads on the BYU Variable Speed Video Courses ---
    http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#BYUvideo

    Bob Jensen's threads on assessment ---
    http://faculty.trinity.edu/rjensen/Assess.htm

Bob Jensen's threads on open sharing courses, lectures, videos, tutorials, and course materials from prestigious universities ---
    http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI

    Bob Jensen's threads on online training and education alternatives ---
    http://faculty.trinity.edu/rjensen/Crossborder.htm

    Bob Jensen's threads on higher education controversies ---
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm


    "A Russian University Gets Creative Against Corruption:  With surveillance equipment and video campaigns, rector aims to eliminate bribery at Kazan State," by Anna Nemtsova, Chronicle of Higher Education, January 17, 2010 ---
    http://chronicle.com/article/A-Russian-University-Gets/63522/

    A student walks down the hallway of a university building and, in a stroke of luck, finds a 1,000-ruble bill lying on the floor. As he bends down to grab it, an idea crosses his mind.

    "That is going to be just enough to pay for my exam!" he exclaims.

    Then the figure of a man in a suit blocks the light over the squatting student.

    "No it won't!" the man says, shaking his head.

    In the next moment, the student is literally kicked out of the university, his official file flying down the stairs behind him.

    This bit of melodrama is not an exam-time nightmare, but a video by students at Kazan State University. They are part of an unusual campaign to stamp out corruption on the campus. Too many students and professors have a "pay to play" mentality, reformers say, in which grades and test scores are bought and sold.

    Anticorruption videos are shown daily. Students participate in classroom discussions about the problem. Kazan State's rector, Myakzyum Salakhov, has installed video cameras in every hallway and classroom, so that the security department can watch students and professors in every corner of the university to catch any bribes as they are made.

    "Our job is to change the attitude to corruption at our university, so all students and professors realize that corruption is damaging our system of education, that corruption should be punished," says Mr. Salakhov, who is outspoken, both on campus and off, about the challenges that Russian higher education faces on this front.

    "We are working on creating a new trend on our campus," he says. "Soon every student giving bribes or professor making money on students will feel ashamed."

    Across Russia, bribery and influence-peddling are rife within academe. Critics cite a combination of factors: Poor salaries lead some professors to pocket bribes in order to make ends meet. Students and their families feel they must pay administrators to get into good universities, if only because everyone else seems to be doing it. And local government officials turn a blind eye, sometimes because they, too, are corrupt.

    "Corruption has become a systemic problem, and we therefore need a systemic response to deal with it," Russia's president, Dmitry Medvedev, said last June.

    Last fall a federal law-enforcement operation called Education 2009 reported that law-enforcement officials had uncovered 3,117 instances of corruption in higher education; of those, 1,143 involved bribes. That is a 90-percent increase over the previous year. Law-enforcement agencies prosecuted 265 university employees for taking bribes.

    But while many Russians shrug their shoulders over this news—reports on corruption in higher education are hardly new—Kazan State decided to do something about it.

    The 200-year-old institution in southwestern Russia, which educated Leo Tolstoy and Vladimir Lenin, among others, is considered among the best universities in Russia. It enrolls 14,000 full-time students, most of whom come from the nearby Volga River region of the country.

Grades for Sale

Students and administrators alike say that bribery is rampant on the campus, and that it includes everyone from students to department chairs.

    "Corruption is just a routine we have to deal with," says Alsu Bariyeva, a student activist and journalism major who joined the campaign after a professor in the physical-culture department suggested that she pay him to get credit for her work that semester. She paid.

    Several students said they once saw a list of prices posted in the hallway of the law department. The cost of a good grade on various exams ranged from $50 to $200. Students from other departments report similar scenarios.

    Many people on the campus identify the arrest last March of the head of the general-mathematics department as a turning point. Police, tipped off by students and parents, charged in and arrested Maryan Matveichuk, 61, as he was pocketing thousands of rubles from a student for a good mark on a summer exam.

The police investigation concluded that in at least six instances Mr. Matveichuk, a respected professor, had accepted bribes of 4,000 to 6,000 rubles, or about $135 to $200, from students in other departments for good grades on their math exams and courses.

    Last September a court in Kazan found the math professor guilty of accepting a total of 29,500 rubles, or $1,000, in bribes, issued a suspended sentence of three years in prison, and stripped him of his teaching credential.

    Mr. Matveichuk's arrest inspired Mr. Salakhov, the rector, to form an anticorruption committee, including administrators and students.

    "I personally believe that corruption sits in our mentality," Mr. Salakhov says. "With students' help, I found three professors taking bribes and asked them to leave. The committee's job is to crack down on corruption within these walls."

Constant Surveillance

Mr. Salakhov's right-hand man in his fight against corruption is Gennady Sadrislamov, the deputy rector responsible for campus security. A large computer screen on his desk displays images from the cameras placed around the campus.

    A former police colonel whose heavy figure appears in the campus anticorruption videos, Mr. Sadrislamov says students are crucial to the campaign's success.

    "Matveichuk brought shame to our university, but unfortunately, he was not the only one making money on the side," the deputy rector says. "Corruption sits in everybody's head. We cannot eliminate the idea of bribing and cheating without students' help."

    With information provided by students and professors, Mr. Sadrislamov goes to the rector to get investigations under way. At least one professor volunteered to quit after he was confronted by Kazan State's anticorruption council, which comprises the rector, his deputies, the security department, and some students. The group meets monthly to discuss the anticorruption campaign.

    The security chief says it will take a while to rid the campus of corruption, because it is so ingrained.

    "I do not believe that professors commit crime because of their low salaries," he says. "They take bribes because it has gone unpunished. That is the real picture in every Russian university all across the country."

    Russian professors' salaries are very low. At Kazan State, they make 20,000 to 25,000 rubles a month, or about $667 to $833.

    "That is not enough to feed the family. People break the law out of need—they have no option," says one professor at the university, who did not want his name to be used.

    Students have mixed views about the corruption campaign. In a conversation among a group of students from the law department, considered to be among the most corrupt, many scoffed at talk of reform.

    "Law-enforcement agencies should reform first," said one student, who declined to give his name but said he was the son of an agent in the Federal Security Service, a successor agency to the KGB. "Russia is rotten of corruption. Even the president admits that. I do not believe somebody could put the end to it on our campus."

    The reformers seem undeterred by such skepticism.

    "Some say we are too naïve to believe that the old traditions can be changed; some avoid even talking to us. But there are students who agree the disease can be treated," says Dmitry Modestov, a third-year student who works with classmates on developing pens, fliers, and other materials with anticorruption slogans.

    "We are trying to change the mind-set on our campus. We say, Knowledge is worth more than bribes."

    A Reform Effort Backfires

    Efforts to combat corruption on a national scale have so far failed to have much of an effect.

    In 2001, Russia introduced an SAT-like test known as the Unified State Exam. It was created in large measure to eliminate corruption in the college-entrance process. Colleges were to rely primarily on exam results in determining who should be admitted. Last year was the first in which testing became obligatory nationally.

    But instead of reducing corruption, the exam apparently has fostered it. Claims arose that exam results were being tampered with by local officials whose job it is to administer the test.

    Another avenue of abuse is the so-called "discount" for students with special needs and children of state employees.

    Universities are obliged to accept lower scores on the Unified State Exam from members of those groups, which comprise 153 categories, including handicapped students, children of Chernobyl victims, and orphans.

    The fixed price for obtaining the needed papers to be labeled as a member of a discount group is 70,000 rubles, or $2,300, says Deliara Yafizova, a first-year student at Kazan State.

    "I entered without a bribe, but I heard that there was a price for making life easier," she said one recent morning in the campus cafe.

    Mr. Salakhov, the rector, saw the problem firsthand when he looked at the applicants for this year's first-year class. "All of a sudden we had crowds of handicapped students applying to our university," he says. "At one department I had 36 handicapped students per 30 available seats. We tried to check every case, especially the cases where it said that the disability expired in two to three months. Many of these disabled children turned out to have parents working as hospital managers. Their papers turned out fake."

    Of the 1,358 full-time students admitted to Kazan State this academic year, more than 250 were from discount categories.

    "That is a tiny little opportunity for universities to stay corrupt," says Mr. Salakhov. "If a big bureaucrat from, say, the ministry of education sends his son with a letter of support to a rector, the university might have to admit that son. But not at this university. We do not let in students with just any score, no matter how high-ranking their parents are."

    As for reporting scores themselves, state-exam corruption has taken on absurd proportions, driven by regional bureaucrats' desire to ensure that the scores of students admitted to local colleges are better than average.

    For example, students in Kabardino-Balkaria and Ingushetia, areas of economic hardship and low-level insurgency near Chechnya, achieved record scores last summer in the Russian-language exam. Yet Russian is not the native language of most residents there.

    In another instance, Lyubov Glebova, head of the Federal Service for the Oversight of Education and Science, flew to Voronezh, in the southern part of the country, as soon as she found out that students' scores in the city were the highest on most of the seven parts of the national exam.

    "You are the country's leaders on Unified State Exam results," she announced at the regional meeting of school and higher-education authorities in Voronezh. Unaware that she was about to accuse them of tampering with test scores, the crowd of local bureaucrats applauded her statement.

    Ms. Glebova fired the head of the regional education authority, and several exam organizers will not be allowed to continue in those roles this year.

    Russia still lives with the Soviet mentality of keeping information secret and presenting fake pictures of life, says Yevgeny Yasin, director of research at the State University Higher School of Economics, in Moscow. Even so, in a country where people tend to follow the signals given by authorities, he is hopeful.

    "It will take a little longer," he says, "but the time of transparency will eventually come to the Russian education system, as it did to many Western countries."

    Continued in article

    Jensen Comment
    A more reliable and probably much cheaper alternative would be instead to adopt competency-based grading and degree awarding. Two North American institutions using competency-based courses are the accredited online undergraduate Western Governors University (WGU) and the Canadian master's degree program at the Chartered Accounting School of Business (CASB). Both programs have a reputation for integrity and toughness.

    Competency-Based Learning --- http://en.wikipedia.org/wiki/Western_Governors_University#Competency-Based_Learning


    Educational Competency Assessment (ECA) Web Site --- http://www.aicpa-eca.org/
    The AICPA recently won a National Association of Colleges and Employers (NACE) Excellence Award for Educational Programming for developing this ECA site to help accounting educators integrate the skill-based competencies needed by entry-level accounting professionals.

    The AICPA provides this resource to help educators integrate the skills-based competencies needed by entry-level accounting professionals. These competencies, defined within the AICPA Core Competency Framework Project, have been derived from academic and professional competency models and have been widely endorsed within the academic community. Created by educators for educators, the evaluation and educational strategies resources on this site are offered for your use and adaptation.

    The ECA site contains a LIBRARY that, in addition to the Core Competency Database and Education Strategies, provides information and guidance on Evaluating Competency Coverage and Assessing Student Performance.

    To assist you as you assess student performance and evaluate competency coverage in your courses and programs, the ECA ORGANIZERS guide you through the process of gathering, compiling and analyzing evidence and data so that you may document your activities and progress in addressing the AICPA Core Competencies.


    Some years back the Texas State Board of Public Accountancy (TSBPA) declared war on distance education by requiring a minimum of five semester courses (15 credits) of accounting onsite instead of online ---
    http://www.cs.trinity.edu/~rjensen/temp/TexasBigBrother.htm

    Large universities in Texas such as the University of Texas and Texas A&M have extensive online degree programs in such areas as science and engineering, but not in accountancy, where very large and highly rated onsite accounting degree programs have shown virtually no interest in reaching out to students who are unable to attend classes on campus. In fact, I've suspected for a long time that these major universities have pressured the TSBPA to discourage distance education.

    Western Governors University --- http://en.wikipedia.org/wiki/Western_Governors_University

    WGU is a competency-based online university where course instructors do not assign grades. Instead, grading is competency-based, much like professional certification examinations such as the CPA Examination and medical board examinations ---
    http://faculty.trinity.edu/rjensen/assess.htm#ComputerBasedAssessment

    "WGU Lassoes Texas," by Steve Kolowich, Inside Higher Ed, August 4, 2011 ---
    http://www.insidehighered.com/news/2011/08/04/governor_perry_partners_with_western_governors_university

    Western Governors University continued to live up to its name on Wednesday, as Texas Governor Rick Perry announced a partnership with the fast-growing online institution — and was promptly showered with praise from nearly everyone.

    Western Governors, a regionally accredited, nonprofit university founded in 1997 by 18 sitting governors, represents an alternative model of higher education that has garnered both praise and skepticism.

    Aimed at working adults (the average student is 36), Western Governors confers bachelor's and master's degrees based on a student's ability to demonstrate skills. There are no classrooms and no professors. Students learn online and mostly on their own, with light guidance from their advisers. They take proctored tests at local testing centers whenever they feel they are ready. Students pay tuition — between $2,890 and $4,250, depending on the program — every six months until they graduate, which 40 percent of them do within four years. (First-time, full-time students are considerably less successful, graduating at a 22 percent rate.)

    The partnership with Texas will create a state-branded version of Western Governors called WGU-Texas. Texas is the third state to create a local version of Western Governors, which is based in Salt Lake City, Utah; Indiana Governor Mitch Daniels created WGU-Indiana last summer, and the Washington State legislature voted WGU-Washington into existence earlier this year.

    Like Indiana and Washington, Texas will not allocate any money out of its state budget to Western Governors, which supports itself based on tuition. However, a Western Governors spokeswoman says the university is currently working with Texas officials to allow Texas residents to spend in-state financial aid grants on the Utah-based institution.

    Amid deep cuts to public higher education budgets, Governor Perry earlier this year challenged state institutions to come up with some way to offer a four-year degree program for the total price of $10,000. Alas, WGU-Texas is not the answer to that challenge, said Catherine Frazier, a Perry spokeswoman. The average Western Governors graduate earns a degree in 30 months, or five six-month payment periods; including fees, that means $14,735 for the least expensive degrees (information technology and business), and $21,890 for the most expensive (nursing pre-licensure).

    “But, certainly, having this affordable option does prove that a degree can be offered by an institution at an affordable price,” Frazier said.

    In its effort to expand into various states, Western Governors has faced criticism from some educators, particularly in Washington state. “[B]rain research demonstrates that real learning requires students to struggle with difficult material under the consistent guidance of good teachers,” wrote Johann Neem, an associate professor of history at Western Washington University, in an April op-ed for The Seattle Times. “WGU denies students these opportunities. In fact, its advertisements pander to prospective students by offering them credit for what they already know rather than promising to teach them something new.”

    But advocates say the Western Governors model has its place in the constellation of state higher education systems. For adult students who possess the knowledge and skills to bypass a chunk of the curriculum — either because they have some prior college or because they have picked it up in their working lives — the competency-based model is a good way to avoid the tedium and expense of sitting through redundant classes, the Center for Adult and Experiential Learning has said.

    “The idea is that these adult learners will bring certain skills and knowledge to the table and that they [will] be able to use them to accelerate progress toward an academic degree and advance in the workforce,” said Dominic Chavez, a spokesman for the Texas Higher Education Coordinating Board, in an e-mail. “While students will typically be able to gain course credit for having specific knowledge in certain areas, students reach a point at which they acquire new knowledge and skills beyond their existing levels,” Chavez said. “These are the skills that take them to the next level and that offer increased workforce opportunities.”

    The WGU-Texas announcement met with glowing praise elsewhere. The partnership “will help address our state's key workforce needs while offering affordable career and continuing education opportunities to Texans over 30," said State Senator Judith Zaffirini, a Democrat who chairs the state senate’s higher education committee, in a statement.

    “This low-cost alternative will expand access to more Texans, engaging our diverse student population and upholding our statewide commitment to help more students reach their academic and lifelong goals,” wrote the Texas Coalition for Excellence in Higher Education, a group of former administrative heavyweights from the Texas higher ed system who have challenged much of Governor Perry's higher education agenda.

    Rey Garcia, president of the Texas Association of Community Colleges, said his organization was planning a statewide articulation agreement with WGU-Texas that would make it easy for students to finish their bachelor’s degrees at Western Governors after two years at community college. “The traditional universities don’t make it terribly easy for students with an applied science degree [at a community college] to transfer into a baccalaureate,” Garcia said in an interview. “WGU is a lot more flexible in that regard.”

    Garcia added that he is not worried students will skip the community colleges altogether and opt for all four years at WGU-Texas because “they’re considerably more expensive than we are.”

    But Mary Aldridge Dean, executive director of the Texas Faculty Association, said prospective students — especially younger ones — should consider more than just the price tag when considering enrolling at WGU-Texas.

    Continued in article
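    The cost figures quoted in the article above can be checked with a little arithmetic: 30 months of enrollment billed in six-month terms is five terms, and five terms of tuition accounts for most but not all of each quoted total. The sketch below verifies this; the per-term tuition rates and quoted totals come from the article, while the "implied fees" are simply backed out from those totals, not published numbers.

```python
# Sketch: check the WGU degree-cost arithmetic quoted in the article above.
# Tuition rates and quoted totals are from the article; implied fees are
# derived here (quoted total minus tuition), not official figures.

TERMS = 30 // 6  # 30 months to graduate, billed per 6-month term -> 5 terms

def total_cost(tuition_per_term, quoted_total):
    """Return (tuition-only total, fees implied by the quoted total)."""
    tuition_only = TERMS * tuition_per_term
    implied_fees = quoted_total - tuition_only
    return tuition_only, implied_fees

# Least expensive programs (IT and business): $2,890/term, quoted total $14,735
print(total_cost(2890, 14735))  # (14450, 285)

# Most expensive program (nursing pre-licensure): $4,250/term, quoted total $21,890
print(total_cost(4250, 21890))  # (21250, 640)
```

In both cases the quoted totals exceed five terms of tuition by a few hundred dollars, consistent with the article's "including fees" qualifier.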

    Question
    Why can't the highest-scoring CPA Exam taker in the nation become a licensed CPA in Texas?

    Answer
    Because in Texas, unlike the other 49 states, nobody can become a CPA without having taken at least five accounting courses onsite. Graduates need not apply for a Texas CPA certificate if they hold distance education degrees or did not take about half of the required accounting, auditing, and tax courses onsite instead of online.

    In effect this means that Texas does not allow fully online accounting degrees, so flagship universities like Texas and Texas A&M, unlike flagship universities in Connecticut, Wisconsin, and Maryland, do not have distance education accounting degrees.

    March 31, 2011 message from Barbara Scofield

    In the state of Texas educators are struggling with ever more onerous rules for candidacy. The AICPA, however, seems to be ignoring issues that loom large for the TSBPA. One of their newly featured "new CPAs" at the link below is an award winner from Colorado (not a 150-hour state) who took her accounting courses online from DeVry (Texas requires 15 face-to-face credit hours of upper-division accounting courses).

    http://www.thiswaytocpa.com/exam-licensure/exam-diary/leslie-rezgui/

    Could this person work as a CPA in Texas?

    Barbara W. Scofield, PhD, CPA
    Chair of Graduate Business Studies
    Professor of Accounting
    The University of Texas of the Permian Basin
    4901 E. University Dr. Odessa, TX 79762
    432-552-2183 (Office)

    November 5, 2010 reply from Bruce Lubich <BLubich@umuc.edu>
    Note that Bruce is the Director of an online accounting distance education program in the University of Maryland System

    Hi Bob,  

    When TX first went to the 15 credit requirement, we had a couple of University of Maryland University College students apply for the exam there, and be rejected. Our transcript doesn't show which courses were taken online. Apparently it's on the TX paperwork. Lying on that is not something to be encouraged for future CPAs. So, unless a student has no desire to sit for the CPA exam or they just need to fill in a few holes to qualify, the TX market has dried up for all online programs.

    Evidently, the TX board takes this requirement very seriously, so my guess is that your Deloitte hire would be denied the ability to sit. Seems to me Deloitte would need to send the student to a different office until they pass the exam.

    As for reciprocity, I haven't heard of any problems. That doesn't mean they're not out there, but I haven't heard of them.

    Bottom line is TX has protected their investment in their brick & mortar schools.

    At one time LA and New Mexico had similar, though weaker rules like this. I believe both have woken up and done away with those rules.

    Bruce Lubich 
    University of Maryland University College

    November 6, 2010 reply from Bob Jensen

    Hi Bruce,

    Thanks for this.
    What you are saying is that the Texas Board may be cooperating with Texas universities to reserve all entry-level accounting jobs in Texas for only graduates of Texas universities. Graduates from your program in the University of Maryland system thereby cannot compete for jobs in Texas CPA firms.

    Out-of-state graduates need not apply. Seems like a great idea for the other 49 states so that graduates of a given state have a monopoly on jobs within the state. Of course the national and international CPA firms might object to complications this creates in hiring. And students who want to leave a state might object to not having jobs available anywhere other than the state where they graduated.

    Why didn't the European Union think of this as a clever way of restricting labor flows between borders?

    Bob Jensen

    My threads (rough draft notes) on this antiquated and absurd ruling by the TSBPA (read that Big Brother) can be found at
    http://www.cs.trinity.edu/~rjensen/temp/TexasBigBrother.htm

     

     


    Online Education Effectiveness and Testing

    Respondus Testing Software

    October 13, 2009 message from Richard Campbell [campbell@RIO.EDU]

    For anyone teaching online, this software is a "must-have". They have released a new (4.0) version with improved integration of multimedia. Below are some videos (created in Camtasia) that demonstrate key features of the software.

    http://www.respondus.com/

    They have tightened up the integration with publisher test banks.
    Richard J. Campbell

    mailto:campbell@rio.edu

    May 20, 2010 message from Richard Campbell [campbell@RIO.EDU]

    Respondus is a very powerful test generator and most publishers provide test banks in that format.
    http://www.screencast.com/t/NTdlNzAw

    Richard J. Campbell
    School of Business
    218 N. College Ave.
    University of Rio Grande
    Rio Grande, OH 45674
    Voice:740-245-7288

    http://faculty.rio.edu/campbell

    Bob Jensen's threads on tools and tricks of the trade ---
    http://faculty.trinity.edu/rjensen/000aaa/thetools.htm


    Learning Effectiveness in Corporate Universities
    A group of colleges that serve adult students on Monday formally announced their effort to measure and report their effectiveness, focusing on outcomes in specific programs. The initiative, known as “Transparency by Design,” on which Inside Higher Ed reported earlier, has grown to include a mix of 10 nonprofit and for-profit institutions: Capella University, Charter Oak State College, Excelsior College, Fielding Graduate University, Franklin University, Kaplan University, Regis University, Rio Salado College, Western Governors University, and Union Institute & University.
    Inside Higher Ed, October 23, 2007 --- http://www.insidehighered.com/news/2007/10/23/qt


    "Cheating in Online Courses," Dan Ariely, August 2012 ---
    http://danariely.com/2012/08/10/cheating-in-online-courses/

    Jensen Comment
    If there is more cheating in online courses, the fault lies with the internal controls of the online system rather than the difference between online and onsite systems per se. Cheating is largely the fault of the online and onsite instructors and their universities. There are controls (not costless) that can reduce online cheating to levels below those of onsite courses ---
    http://faculty.trinity.edu/rjensen/Assess.htm#OnlineOffCampus
    For example, with proper Webcam procedures, observing a student taking an online test can be even more one-on-one than onsite proctoring, or the examination can be proctored by hiring the Village Vicar or a Sylvan testing center.

    Another approach is to outsource proctoring to local K-12 teachers.


    Respondus Monitor - online exams proctor ---
    http://youtu.be/lGyc_HBchOw


    One of the selling points of for-profit universities is that they are more open to non-traditional students vis-à-vis nonprofit traditional colleges and universities. This is thus a "diversity" selling point for for-profit universities.

    However, one of the drawbacks is that when traditional colleges and universities attempt to be more open to diversity and admission of non-traditional students, there are huge problems of enforcing academic standards and serious possibilities that most of the non-traditional students will not graduate.

    Here's how some for-profit universities deal unethically with assessment issues. It's small wonder that for-profit universities are very popular with non-traditional students.

    "Undercover Probe Finds Lax Academic Standards at Some For-Profit Colleges," by Kelly Field, Chronicle of Higher Education, November 22, 2011 ---
    http://chronicle.com/article/Undercover-Probe-Finds-Lax/129881/?sid=wc&utm_source=wc&utm_medium=en

    An undercover investigation by the Government Accountability Office has found evidence of lax academic standards in some online for-profit programs.

    The probe, which is described in a report made public Tuesday, found that staff at six of the 12 colleges that enrolled the investigators tolerated plagiarism or awarded credit for incomplete or shoddy work.

    The release of the report, "For-Profit Schools: Experiences of Undercover Students Enrolled in Online Classes at Selected Colleges," comes roughly a year after the accountability office revised an earlier report on recruiting abuses at for-profit colleges, acknowledging errors and omissions in its findings. A coalition of for-profit colleges has sued the office over that report, accusing its investigators of professional malpractice.

    In that earlier investigation, the office sent undercover investigators to 15 for-profit colleges to pose as prospective students. It found widespread deception in recruiting by the colleges, with many employees providing students with false or misleading information about graduation rates, job prospects, or earning potential.

    This time, the agents attempted to enroll in online programs at 15 for-profit colleges using a home-school diploma or a diploma from a closed high school. Twelve of the colleges accepted them.

    The "students" then proceeded to skip class, plagiarize, and submit "substandard" work. Though several ultimately failed their classes, some got credit for shoddy or plagiarized work along the way.

    At one college, a student received credit for six plagiarized assignments; at another, a student submitted photos of political figures and celebrities in lieu of an essay, but still earned a passing grade. A third student got full credit on a final project, despite completing only two of the three required components. That same student received full credit for an assignment that had clearly been prepared for another class.

    In two cases, instructors confronted students about their repeated plagiarism but took no disciplinary action against them. One student received credit for a response that was copied verbatim from other students' discussion posts.

    Instructors at the other six colleges followed their institutions' policies on grading and plagiarism, and in some cases offered to help students who appeared to be struggling.

    All of the students ultimately withdrew or were expelled from the programs. Three of the colleges failed to provide the departing students with federally required exit counseling about their repayment options and the consequences of default.

    Sen. Tom Harkin, Democrat of Iowa, who requested the report, said its findings "underscore the need for stronger oversight of the for-profit education industry."

    "It is obvious that Congress must step in to hold this heavily federally subsidized industry more accountable," he said.

    Continued in article

    Jensen Comment
    This makes me wish that similar investigations (audits?) were expanded to huge samples of nonprofit colleges and universities where grade inflation is also rampant.

    Most universities now have financial internal auditors and are subjected to governmental or independent CPA audits. But few have independent audits of the variability in academic standards between departments and between individual faculty members.

    Bob Jensen's threads on For-Profit Universities Operating in the Gray Zone of Fraud ---
    http://faculty.trinity.edu/rjensen/HigherEdControversies.htm#ForProfitFraud

    November 28, 2011 reply from David Albrecht

    Bob, I agree with your comment that the study could have been expanded. As it is, the study is hardly scientific. The sample size is small, and we have no idea whether lax standards, instructor negligence, or instructor mercy are responsible for the actions. In traditional schools, whether they be state funded or private, I wonder if more abuses would be found among tenure-track or non-tenure-track profs.

    Dave Albrecht

    November 28, 2011 reply from Bob Jensen

    Hi David,

    In my opinion, grade inflation and lax academic standards may be more of a problem for tenured professors than for probationary (non-tenured) professors on tenure track, and maybe even more than for adjunct professors (but adjuncts are so variable it's hard to draw generalizations).

    I will provide an example of non-tenured faculty who are on tenure tracks at Trinity University. Such probationary faculty are under severe scrutiny by their immediate departmental faculty and upper-level university committees. There's heavy pressure on all faculty involved to warn probationary faculty about inadequate versus adequate progress toward tenure. The hope is that all nontenured faculty not making adequate progress will have been terminated by year six, so that all faculty who do apply for tenure have a high probability of being approved.

    Included in what Trinity calls "probationary reviews" as well as final "tenure applications" are teaching evaluations, grading distributions for each course, copies of examinations in each course, copies of course syllabi, and self-review statements of candidates. There are also external (off-campus) reviews in tenure applications, but these are mostly focused on research and publication.

    Tenured faculty are not subjected to such rigorous reviews, and hence a few tenured faculty in my viewpoint become more lax about academic standards. Hopefully these are just outliers. There is a rigorous review of associate professors at times when they apply for full professorships. These are much like tenure applications and require a truckload of teaching evaluations, grading distributions for each course, copies of examinations in each course, copies of course syllabi, and self-review statements of candidates. There are also external (off-campus) reviews in full-professorship applications, but these are mostly focused on research and publication.

    In my 24 years at Trinity University I was completely surprised by the proportion of hired tenure-track faculty who were terminated before even reaching the tenure application stage. I was also even more surprised by some of the tenure applicants and full-professor applicants who were rejected by the P&T Committee and/or the President of the University.

    I was also surprised in some years by some of the long-term tenured faculty (some of whom were lifetime associate professors) who had their tenure contracts bought out by deals made with the President of the University. In some cases those buyouts were for lackluster teaching and/or lackluster academic standards.

    Of course there were also a few faculty members who had some other dysfunctional behavior leading to buyouts. One of my friends had an early onset of dementia and was somewhat of a problem even after termination (on a generous early retirement package), because he continued to hang around computer labs and the campus library and showed off his vanity press "research" book that was garbage to the point of embarrassment. He claimed that proper exercise could prevent all forms of cancer.


    Some campus officials and faculty, including me, breathed a sigh of relief when he eventually died and stopped giving his vanity press book away for free around Texas.

    Of course there are also those who will breathe a sigh of relief when one of their retired faculty members stops sending so many messages to the AECM.

    Respectfully,
    Bob Jensen

    "The Chronicle's special report on Online Learning explores how calls for quality control and assessment are reshaping online learning," (Not Free), Chronicle of Higher Education, November 2011 ---
    https://www.chronicle-store.com/Store/ProductDetails.aspx?CO=CQ&ID=78602&cid=ol_nlb_wc

    The Chronicle's special report on Online Learning explores how calls for quality control and assessment are reshaping online learning. As online learning spreads throughout higher education, so have calls for quality control and assessment. Accrediting groups are scrambling to keep up, and Congress and government officials continue to scrutinize the high student-loan default rates and aggressive recruiting tactics of some for-profit, mostly online colleges. But the push for accountability isn't coming just from outside. More colleges are looking inward, conducting their own self-examinations into what works and what doesn't.

    Also in this year's report:
     
    • Strategies for teaching and doing research online
    • Members of the U.S. military are taking online courses while serving in Afghanistan
    • Community colleges are using online technology to keep an eye on at-risk students and help them understand their own learning style
    • The push to determine what students learn online, not just how much time they spend in class
    • Presidents' views on e-learning
    Bob Jensen's threads on asynchronous learning ---
    http://faculty.trinity.edu/rjensen/255wp.htm

    Bob Jensen's threads on online course and degree programs ---
    http://faculty.trinity.edu/rjensen/Crossborder.htm

     

     




    "Keeping an Eye on Online Students," by Andrea L. Foster, Chronicle of Higher Education, July 21, 2008 ---
    http://chronicle.com/wiredcampus/index.php?id=3181&utm_source=wc&utm_medium=en 

    Technology vendors are eager to sell college officials hardware and software designed to verify the identity of online students—and thereby prevent cheating. A free article in The Chronicle describes some of the technologies that colleges are trying out to make certain that the person taking an online exam is, in fact, the student enrolled in the course. The technologies include Web cameras that watch students taking tests and scanners that capture students’ fingerprints.

    A provision in a bill reauthorizing the Higher Education Act is fueling much of the interest in this issue. A paper released in February by the Western Interstate Commission for Higher Education says the provision—while not onerous to most distance-learning providers—could “drive up the cost of these important education programs.”

    And some online institutions fear that the provision would require them to have their students travel to distant locations to take proctored exams on paper. The result? Some states would conclude that the institutions have a “physical presence” in their states, and would subject the institutions to “a whole new set of state regulations,” says John F. Ebersole, president of Excelsior College.


    Question
    What are some of the features of UserVue from TechSmith for evaluating student learning?

    Some of the reviews of the revised “free” Sound Recorder in Windows Vista are negative. It’s good to learn that Richard Campbell is having a good experience with it when recording audio and when translating the audio into text files --- http://microsoft.blognewschannel.com/archives/2006/05/24/windows-vista-sound-recorder 

    For those of you on older systems as well as Vista there is a free recorder called Audacity that I like --- http://audacity.sourceforge.net/ 
    I really like Audacity. There are some Wiki tutorials at http://audacity.sourceforge.net/help/tutorials 
    Some video tutorials are linked at http://youtube.com/results?search_query=audacity+tutorial&search=Search 

    I have some dated threads on speech recognition at http://faculty.trinity.edu/rjensen/speech.htm  Mac users can find options at http://www.macspeech.com/ 

    In addition, I like Camtasia (recording screen shots and camera video) and Dubit (for recording audio and editing audio) from TechSmith --- http://www.techsmith.com/ 
    TechSmith products are very good, but they are not free downloads.

    UserVue --- http://www.techsmith.com/uservue/features.asp 
    TechSmith has a newer product called UserVue that really sounds exciting, although I’ve not yet tried it. It allows you to view and record what is happening on someone else’s computer, such as a student’s computer. Multiple computers can be viewed at the same time. Images and text can be recorded. Pop-up comments can be inserted by the instructor into text written by students.

    UserVue can be used for remote testing!

    UserVue offers great hope for teaching students with disabilities, such as sight- and/or hearing-impaired students --- http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#Handicapped


    "Ways to prevent cheating on online exams," by Gail E. Krovitz, eCollege Newsletter, Vol 8, Issue 6 November 15, 2007 ---
    http://www.ecollege.com/Educators_Voice.learn

    • Write every exam as if it is open book. As much as we try to convince ourselves otherwise, we need to assume that students use resources on their exams (the book, Internet search engines and so on) and write our exams accordingly. Are all of our questions asking for information that can be gathered quickly from the textbook or from a simple Internet search? Then we should re-think our questions (see following guideline). Open-book exams have the potential to test higher level thinking skills, instead of just memorizing facts. Unfortunately, scores on open-book exams are often lower, as students don’t take exam preparation as seriously when they know they can use their book, so training in open-book exam-taking skills would be helpful (Rakes).
    • Write effective multiple-choice exam questions. Because it is so easy to use prohibited materials during online exams, it is foolish to design tests that simply test factual information that is easily looked up. Although it is difficult to do, online exams are most effective when they test higher order thinking skills (application, synthesis and evaluation) and ask questions that cannot be answered by glancing at the book or a quick internet search.  See Christe, Dewey and Rohrer for more information about developing quality multiple-choice questions.
    • Set tight time limits per question. Even with open book exams (and especially for ones that are not open book), it is important to give a tight time frame for the test, so students will not have time to look up each question in the book. The time limit chosen will obviously vary depending on subject matter, type of questions asked, etc. For strict fact recall, instructors might start by giving a total time based on allowing 60–90 seconds per question and then adjusting as necessary based on their student body. More time would need to be given for higher-level thinking questions or for those involving calculations.
    • Use large question pools to offer different, randomly-selected questions to each student. See “Tip: getting the most out of exam question pools” for a good description of using question pools in the eCollege system. The question pools must be large enough to minimize overlap of questions between tests. Rowe provides a chart comparing the average number of questions in common for two students with different question pool sizes and different numbers of questions drawn from the pool. For example, 5 questions drawn from a pool of 10 questions results in 2.5 questions in common between two students, while 5 questions drawn from a pool of 25 questions results in only 1 question in common between two students. You can consult the mathematical formula or go with common sense: a larger question pool is better for reducing the likelihood that students will get the same questions.  
    • Manually create different versions of the exam with the same general question pools, but with scrambled answers for each question. For example, in one version of the exam, the correct answer could be B, while the answer choices are scrambled in the other version so the correct answer is D. You could use the Group function to assign half of the class to one exam, and the other half the class to the other one. Cizek cites research showing that scrambling questions and answer choices does reduce cheating, while simply changing the order of the same questions does not reduce cheating.  In fact, in a study of student’s perceived effectiveness of cheating prevention strategies, having scrambled test forms was the number one factor perceived by students to prevent cheating (Cizek).
    • Assign a greater number of smaller tests instead of one or two large ones. This reduces the incentive to cheat, as each test isn’t as likely to make or break a student’s grade; the pressure of the midterm and final-only structure in some classes is a strong incentive to cheat on those exams. Also, this increases the logistical difficulties of cheating if a student is relying on someone else to help them or to take the test for them.
    • Provide a clear policy for what happens if students cheat… and enforce it! There are many important things instructors can do from this perspective, such as discussing what constitutes cheating, the importance of academic honesty, any honor codes in place, what measures will be in place to prevent and detect cheating and the punishments for cheating. If students perceive that the instructor does not care about cheating, then incidents of both spontaneous and planned cheating increase (Cizek). Students know that most cheaters don’t get caught and that punishments aren’t harsh for those who do get caught (Kleiner and Lord). Research has found that punishment for cheating is one of the main deterrents to cheating (Kleiner and Lord).
    • Set the exam Gradebook Review Date for after the exam has closed.  The Gradebook Review Date is when the students can access their graded exam in the Gradebook. If this date is set before the end of the exam, students who take the exam early could access their exam in the Gradebook (and usually the correct answers as well) and distribute the questions to students who would take the exam later.    
    • Revise tests every term.  Sooner or later exam questions are likely to get out into the student world and get distributed between students. This is especially possible when students view their graded exams in the Gradebook, as they have all the time in the world to copy or print their questions (usually with the correct answers provided). Periodic changes to the test bank can help minimize the impact of this. Minor changes such as rewording the questions and changing the order of answers (especially if different versions with scrambled answers are not used) can help extend the useful life of a test bank.
    • Use ExamGuard™ if the feature is available at your school. ExamGuard prohibits the following actions while students are taking online exams: printing, copying and pasting anything into or from the assessment, surfing the Web, opening or using other applications, using Windows system key functions or clicking on any other area within the course. Also note that ExamGuard prohibits students from printing or copying exam materials while viewing the exam in the Gradebook.  If you are interested in learning more about ExamGuard, please contact your Account Executive or Client Services Consultant.
    • Give proctored exams in a traditional classroom. While this is not an option for many online courses, it is a route that some schools take, especially if they largely serve a local population. With proctored exams, instructors feel more in control of the testing environment and more able to combat cheating in a familiar classroom setting (or at least to have cheating levels on par with those seen in a traditional exam setting). In a study on cheating in math or fact-based courses, Trenholm concludes that proctoring is “the single greatest tool we presently have to uphold the integrity of the educational process in instruction in online MFB (math or fact based) courses” (p. 297).  Also, Cizek showed that attentive proctoring reduced cheating directly and by giving the impression that academic integrity is valued.
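The overlap figures Rowe's chart reports in the question-pool guideline above follow from a one-line expected-value calculation. A short sketch (illustrative, not Rowe's code; it assumes each student's questions are drawn uniformly at random without replacement):

```python
def expected_overlap(pool_size: int, questions_drawn: int) -> float:
    """Expected number of questions two students share when each exam is
    `questions_drawn` questions selected uniformly at random, without
    replacement, from a pool of `pool_size` questions.

    Each question on student A's exam lands on student B's exam with
    probability questions_drawn / pool_size, so by linearity of
    expectation the mean overlap is questions_drawn**2 / pool_size.
    """
    if questions_drawn > pool_size:
        raise ValueError("cannot draw more questions than the pool holds")
    return questions_drawn ** 2 / pool_size

# The figures cited from Rowe's chart:
print(expected_overlap(10, 5))   # 2.5 questions in common
print(expected_overlap(25, 5))   # 1.0 question in common
```

The formula makes the common-sense conclusion precise: doubling the pool size halves the expected overlap between any two students' exams.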

     

    December 1, 2007 reply from Charles Wankel [wankelc@VERIZON.NET]

    Thanks Bob for sharing.

    Some of the points seem to fall back to face-to-face course ideas but others were very helpful. I found the emphasis on higher order thinking skills (application, synthesis and evaluation) to be a great one. I am going to try to work on putting synthesis into my students’ assignments and projects.

    Charlie Wankel

    St. John’s University,
    New York

    December 1, 2007 reply from David Raggay [draggay@TSTT.NET.TT]

    Please be so kind as to refer me to the specific article or articles wherein I can find a discussion on “higher order thinking skills (application, synthesis and evaluation)”

    Thanks,

    David Raggay,
    IFRS Consultants,
    Trinidad and Tobago

    December 1, 2007 reply from Bob Jensen

    Hi David,

    There are several tacks to take on this question. Charlie provides some key words (see above).

    I prefer to think of higher order metacognition --- http://en.wikipedia.org/wiki/Metacognition
    For specific examples in accounting education see http://faculty.trinity.edu/rjensen/265wp.htm
    One of the main ideas is to make students do their own discovery learning. Blood, sweat, and tears are the best teachers.

    Much of the focus in metacognitive learning is how to examine/discover what students have learned on their own and how to control cheating when assessing discovery and concept learning --- http://faculty.trinity.edu/rjensen/assess.htm 

    Higher order learning attempts to make students think more conceptually. In particular, note the following quotation from Bob Kennelly at http://faculty.trinity.edu/rjensen/assess.htm#ConceptKnowledge 

    We studied whether instructional material that connects accounting concept discussions with sample case applications through hypertext links would enable students to better understand how concepts are to be applied to practical case situations.

    Results from a laboratory experiment indicated that students who learned from such hypertext-enriched instructional material were better able to apply concepts to new accounting cases than those who learned from instructional material that contained identical content but lacked the concept-case application hyperlinks.

    Results also indicated that the learning benefits of concept-case application hyperlinks in instructional material were greater when the hyperlinks were self-generated by the students rather than inherited from instructors, but only when students had generated appropriate links.

    Along broader lines we might think of it in terms of self-organizing of atomic-level knowledge --- http://en.wikipedia.org/wiki/Self-organization 

    Issues are still in great dispute regarding the more than 80 suggested “learning styles” --- http://en.wikipedia.org/wiki/Learning_styles
    Assessment and control of cheating are still huge problems.

     Bob Jensen

    December 2, 2007 reply from Henry Collier [henrycollier@aapt.net.au]

    G’day Bob … I’m not sure whether David is asking for the Bloom citation or not. I do not disagree with your post in any way, but wonder if David is looking for the ‘start’ of the art/science. I have also suggested that he may want to look at Bob Gagne’s approach to the same issues. Perhaps William Graves Perry’s 1970 book could / would also be useful.

    Best regards from spring time in New South Wales where the roses in my garden are blooming and very pretty.

    Henry

    New Technology for Proctoring Distance Education Examinations
    "Proctor 2.0," by Elia Powers, Inside Higher Ed, June 2, 2006 --- http://www.insidehighered.com/news/2006/06/02/proctor

    Bob Jensen's threads on online versus onsite assessment are at http://faculty.trinity.edu/rjensen/assess.htm#OnsiteVersusOnline

    Bob Jensen's threads on cheating are at http://faculty.trinity.edu/rjensen/Plagiarism.htm


    "Far From Honorable," by Steve Kolowich, Inside Higher Ed, October 25, 2011 ---
    http://www.insidehighered.com/news/2011/10/25/online-students-might-feel-less-accountable-honor-codes

    Much of the urgency around creating a “sense of community” in online courses springs from a desire to keep online students from dropping out. But a recent paper suggests that strengthening a sense of social belonging among online students might help universities fight another problem: cheating.

    In a series of experiments, researchers at Ohio University found that students in fully online psychology courses who signed an honor code promising not to cheat broke that pledge at a significantly higher rate than did students in a “blended” course that took place primarily in a classroom.

    “The more distant students are, the more disconnected they feel, and the more likely it is that they’ll rationalize cheating,” Frank M. LoSchiavo, one of the authors, conjectured in an interview with Inside Higher Ed.

    While acknowledging the limitations inherent to a study with such a narrow sample, and the fact that motivations are particularly hard to pin down when it comes to cheating, LoSchiavo and Mark A. Shatz, both psychology professors at Ohio University's Zanesville campus, said their findings may indicate that meeting face-to-face with peers and professors confers a stronger sense of accountability among students. “Honor codes,” LoSchiavo said, “are more effective when there are [strong] social connections.”

    Honor codes are not, of course, the only method of deterring cheating in online courses. The proliferation of online programs has given rise to a cottage industry of remote proctoring technology, including one product that takes periodic fingerprint readings while monitoring a student’s test-taking environment with a 360-degree camera. (A 2010 survey by the Campus Computing Project suggests that a minority of institutions authenticate the identities of online students as a rule.)

    But LoSchiavo said that he and Shatz were more interested in finding out whether honor codes held any sway online. If so, then online instructors might add pledges to their arsenal of anti-cheating tools, LoSchiavo said. If not, it provides yet an intriguing contribution to the discussion about student engagement and “perceived social distance” in the online environment.

    They experimented with the effectiveness of honor codes in three introductory psychology courses at Ohio University. The first course had 40 students and was completely online. These students, like those in subsequent trials, were a mix of traditional-age and adult students, mostly from regional campuses in the Ohio University system. There was no honor code. Over the course of the term, the students took 14 multiple-choice quizzes with no proctoring of any kind. At the end of the term, 73 percent of the students admitted to cheating on at least one of them.

    The second trial involved another fully online introductory course in the same subject. LoSchiavo and Shatz divided the class evenly into two groups of 42 students, and imposed an honor code -- posted online with the other course materials -- to one group but not the other. The students “digitally signed the code during the first week of the term, prior to completing any assignments.” The definition of cheating was the same as in the first trial: no notes, no textbooks, no Internet, no family or friends. There was no significant difference in the self-reported cheating between the two groups.

    In a third trial, the professors repeated the experiment with 165 undergraduates in a “blended” course, where only 20 percent of the course was administered online and 80 percent in a traditional classroom setting. Again, they split the students into two groups: one in which they were asked to sign an honor code, and another in which they were not.

    This time, when LoSchiavo and Shatz surveyed the students at the end of the term, there was a significant difference: Students who promised not to cheat were about 25 percent less likely to cheat than were those who made no such promise. Among the students who had not signed the code, 82 percent admitted to cheating.

    LoSchiavo concedes that this study offers no definitive answers on the question of whether students are more likely to cheat in fully online courses. Cheating is more often than not a crime of opportunity, and containing integrity violations probably has much more to do with designing a system that limits the opportunities to cheat and gives relatively little weight to those assignments for which cheating is hardest to police.

    “The bottom line is that if there are opportunities, students will cheat,” he said. “And the more opportunities they have, the more cheating there will be, and it is incumbent upon professors to put in a system that, when it’s important, cheating will be contained.”

    Continued in article

    Jensen Comment
    I think universities like Trinity University that expanded their honor codes to include student courts are generally happy with the operations of those honor codes. However, Trinity has only full time students and no distance education courses.

    One thing that I hated giving up was grading control. For most of my teaching career I gave F grades to students who seriously cheated in my courses. Under the revised Trinity Honor Code, instructors can no longer control the granting of F grades for cheating.

    When I was a student at Stanford the Honor Code included a pledge to report cheating of other students. I think most universities have watered down this aspect of their honor codes because, in this greatly increased era of litigation, student whistle blowers can be sued big time. Universities may continue to encourage such whistle blowing, but they no longer make students sign pledges that on their honor they will be whistleblowers if they do not want to bear the risk of litigation by students they report.

    Bob Jensen's threads on assessment ---
    http://faculty.trinity.edu/rjensen/Assess.htm


    Accounting Professors in Support of Online Testing That, Among Other Things, Reduces Cheating
    These same professors became widely known for their advocacy of self-learning in place of lecturing

    "In Support of the E-Test," by Elia Powers, Inside Higher Ed, August 29, 2007 --- http://www.insidehighered.com/news/2007/08/29/e_test

    Critics of testing through the computer often argue that it’s difficult to tell if students are doing their own work. It’s also unclear to some professors whether using the technology is worth their while. A new study makes the argument that giving electronic tests can actually reduce cheating and save faculty time.

    Anthony Catanach Jr. and Noah Barsky, both associate professors of accounting at the Villanova School of Business, came to that conclusion after speaking with faculty members and analyzing the responses of more than 100 students at Villanova and Philadelphia University. Both Catanach and Barsky teach a course called Principles of Managerial Accounting that utilizes the WebCT Vista e-learning platform. The professors also surveyed undergraduates at Philadelphia who took tests electronically.

    The Villanova course follows a pattern of Monday lecture, Wednesday case assignment, Friday assessment. The first two days require in-person attendance, while students can check in Friday from wherever they are.

    “It never used to make sense to me why at business schools you have Friday classes,” Catanach said. “As an instructor it’s frustrating because 30 percent of the class won’t show up, so you have to redo material. We said, how can we make that day not lose its effectiveness?”

    The answer, he and Barsky determined, was to make all electronically submitted group work due on Fridays and have that be electronic quiz day. That’s where academic integrity came into play. Since the professors weren’t requiring students to be present to take the exams, they wanted to deter cheating. Catanach said programs like the one he uses mitigate the effectiveness of looking up answers or consulting friends.

    In electronic form, questions are given to students in random order so that copying is difficult. Professors can change variables within a problem to make sure that each test is unique while also ensuring a uniform level of difficulty. The programs also measure how much time a student spends on each question, which could signal to an instructor that a student might have slowed to use outside resources. Backtracking on questions generally is not permitted. Catanach said he doesn’t pay much attention to time spent on individual questions. And since he gives his students a narrow time limit to finish their electronic quizzes, consulting outside sources would only lead students to be rushed by the end of the exam, he added.
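The randomization described above (per-student question order plus changed variables at a uniform difficulty) can be sketched roughly as follows. This is an illustrative toy, not WebCT Vista's actual mechanism; the question template, function names, and seeding scheme are all hypothetical:

```python
import random

def make_depreciation_question(seed: str) -> dict:
    """Generate one variant of a (hypothetical) straight-line
    depreciation question; the same seed always yields the same variant."""
    rng = random.Random(seed)
    cost = rng.randrange(10_000, 50_000, 1_000)
    salvage = rng.randrange(1_000, 5_000, 500)
    life = rng.choice([4, 5, 8, 10])
    return {
        "text": (f"Equipment costs ${cost:,}, salvage value ${salvage:,}, "
                 f"useful life {life} years. What is the annual "
                 f"straight-line depreciation?"),
        "answer": (cost - salvage) / life,
    }

def build_exam(student_id: str, question_ids: list[int]) -> list[dict]:
    rng = random.Random(student_id)   # question order differs per student
    order = question_ids[:]
    rng.shuffle(order)
    # Seeding each variant on (student, question) changes the numbers per
    # student while keeping every exam reproducible for grading.
    return [make_depreciation_question(f"{student_id}:{q}") for q in order]
```

Because every exam is a pure function of the student identifier, an instructor can regenerate any student's exact exam later — which is what makes copying between neighbors largely pointless.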

    Forty-five percent of students who took part in the study reported that the electronic testing system reduced the likelihood of their cheating during the course.

    Stephen Satris, director of the Center for Academic Integrity at Clemson University, said he applauds the use of technology to deter academic dishonesty. Students who take these courses might think twice about copying or plagiarizing on other exams, he said.

    “It’s good to see this program working,” Satris said. “It does an end run around cheating.”

    The report also makes the case that both faculty and students save time with e-testing. Catanach is up front about the initial time investment: For instructors to make best use of the testing programs, they need to create a “bank” of exam questions and code them by topic, learning objectives and level of difficulty. That way, the program knows how to distribute questions. (He said instructors should budget roughly 10 extra hours per week during the course for this task.)

    The payoff, he said, comes later in the term. In the study, professors reported recouping an average of 80 hours by using the e-exams. Faculty don’t have to hand-grade tests (that often being a deterrent for the Friday test, Catanach notes), and graduate students or administrative staff can help prepare the test banks, the report points out.

    Since tests are taken from afar, class time can be used for other purposes. Students are less likely to ask about test results during sessions, the study says, because the computer program gives them immediate results and points to pages where they can find out why their answers were incorrect. Satris said this type of system likely dissuades students from grade groveling, because the explanations are all there on the computer. He said it also makes sense in other ways.

    “I like that professors can truly say, ‘I don’t know what’s going to be on the test. There’s a question bank; it’s out of my control,’ ” he said.

    And then there’s the common argument about administrative efficiency: An institution can keep a permanent electronic record of its students.

    Survey results showed that Villanova students, who Catanach said were more likely to have their own laptop computers and be familiar with e-technology, responded better to the electronic testing system than did students at Philadelphia, who weren’t as tech savvy. Both Catanach and Satris said the e-testing programs are not likely to excite English and philosophy professors, whose disciplines call for essay questions rather than computer-graded content.

    From a testing perspective, Catanach said the programs can be most helpful for faculty with large classes who need to save time on grading. That’s why the programs have proven popular at community colleges in some of the larger states, he said.

    “It works for almost anyone who wants to have periodic assessment,” he said. “How much does the midterm and final motivate students to keep up with material? It doesn’t. It motivates cramming. This is a tool to help students keep up with the material.”

    August 29, 2007 reply from Stokes, Len [stokes@SIENA.EDU]

    I am also a strong proponent of active learning strategies. I have the luxury of a small class size, usually fewer than 30, so I can adapt my classes to student interaction and can have periodic assessment opportunities as they fit the flow of materials rather than the calendar. I still think a push toward smaller classes with more faculty face time is better than computer tests. One lecture day and one case day does not mean active learning. It is better than no case days, but it is still a lecture day. I don’t have real lecture days; every day involves some interactive material from the students.

    While I admit I can’t pick up all trends in grading the tests, I do pick up a lot of things, so I have a tendency to use a high proportion of essays and small problems. I then try to address common errors in class and also look at my approach to teaching the material.

    Len

     

    Bob Jensen attempts to make a case that self learning is more effective for metacognitive reasons --- http://faculty.trinity.edu/rjensen/265wp.htm
    This document features the research of Tony Catanach, David Croll, Bob Grinaker, and Noah Barsky.

    Bob Jensen's threads on the myths of online education are at http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#Myths


    How a Student Laid Up With a Broken Back Learned From Free Open Sharing Ivy League Courses
    The big issue is how to get transcript credit for his accomplishments?

    The Year 1858

    When the University of London instituted correspondence courses in 1858, the first university to do so, its students (typically expatriates in what were then the colonies of Australia, Canada, India, New Zealand, and South Africa) discovered the programme by word of mouth and wrote the university to enrol.  The university then despatched, by post-and-boat, what today we would call the course outline, a set of previous examination papers, and a list of places around the world where examinations were conducted.  It left any "learning" to the hapless student, who sat the examination whenever he or she felt ready:  a truly "flexible" schedule!  This was the first generation of distance education (Tabsall and Ryan, 1999):  "independent" learning for highly motivated and resourceful autodidacts disadvantaged by distance. (Page 71)
    Yoni Ryan who wrote Chapter 5 of
    The Changing Faces of Virtual Education --- http://www.col.org/virtualed/ 
    Dr. Glen Farrell, Study Team Leader and Editor
    The Commonwealth of Learning

    Of course students paid for correspondence courses and they got credit (often they took exams proctored by the village vicar). In days of old, the University of Chicago granted credit via onsite examination --- students did not have to attend courses but had to pay for college degrees earned via examinations. In modern times we usually insist that even online students do more for course credits than merely passing examinations. Examples of other work that's graded include term papers and team projects, which, of course, can be required of online students in addition to examinations that might be administered at test sites like Sylvan testing sites or community colleges that administer examinations for major universities.

    In modern times, countless courses are available online, often from very prestigious universities for credit for students admitted to online programs. Courses from prestigious universities are also free to anybody in the world, but these almost never award degree credits since examinations and projects are not administered and graded. For links to many of the prestigious university course materials, videos lectures, and complete courses go to http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI

    One Business Model from Harvard
    The Harvard Business School has a basic accounting course that can be purchased and administered online by other colleges. Of course the credits granted are from College X and not Harvard such that College X must provide instructors for coordinating the course and administering the examinations and projects.
    Financial Accounting: An Introductory Online Course by David F. Hawkins, Paul M. Healy, Michael Sartor Publication date: Nov 04, 2005. Prod. #: 105708-HTM-ENG
    http://harvardbusiness.org/product/financial-accounting-an-introductory-online-course/an/105708-HTM-ENG?Ntt=Basic+Accounting

    "Open Courses: Free, but Oh, So Costly:  Online students want credit; colleges want a working business model," by Marc Parry, Chronicle of Higher Education, October 11, 2009 --- Click Here
    http://chronicle.com/article/Free-Online-Courses-at-a-Very/48777/?sid=wb&utm_source=wb&utm_medium=en

    Steven T. Ziegler leapt to MIT off a mountain.

    He was on a hang glider, and he slammed the ground hard on his chin. Recovery from surgery on his broken back left the 39-year-old high-school dropout with time for college courses.

    From a recliner, the drugged-up crash victim tried to keep his brain from turning to mush by watching a free introductory-biology course put online by the Massachusetts Institute of Technology. Hooked, he moved on to lectures about Cormac McCarthy's novel Blood Meridian from an English course at Yale. Then he bought Paradise Lost.

    A success for college-made free online courses—except that Mr. Ziegler, who works for a restaurant-equipment company in Pennsylvania, is on the verge of losing his job. And those classes failed to provide what his résumé really needs: a college credential.

    "Do I put that I got a 343 out of 350 on my GED test at age 16?" he says, throwing up his hands. "I have nothing else to put."

    [Photos: At Yale U., technicians record John Geanakoplos, a professor of economics, giving a lecture that will be available free online. Steven Ziegler cooking dinner at home with his family.]

    Colleges, too, are grappling with the limits of this global online movement. Enthusiasts think open courses have the potential to uplift a nation of Zieglers by helping them piece together cheaper degrees from multiple institutions. But some worry that universities' projects may stall, because the recession and disappearing grant money are forcing colleges to confront a difficult question: What business model can support the high cost of giving away your "free" content?

    "With the economic downturn, I think it will be a couple of years before Yale or other institutions are likely to be able to make substantial investments in building out a digital course catalog," says Linda K. Lorimer, vice president and secretary at Yale, which is publishing a 36-class, greatest-hits-style video set called Open Yale Courses. Over the long term, she argues, such work will flourish.

    Maybe. But Utah State University recently mothballed its OpenCourseWare venture after running out of money from the state and from the William and Flora Hewlett Foundation, which has financed much of the open-content movement. Utah State had published a mix of lecture notes, syllabi, audio and video recordings from more than 80 courses, a collection thought to be the country's second-largest behind the pioneering, 1,940-class MIT OpenCourseWare project. The program needed only $120,000 a year to survive. But the economy was so bad that neither the university nor the state Legislature would pony up more money for a project whose mission basically amounted to blessing the globe with free course materials.

    'Dead by 2012'

    More free programs may run aground. So argues David Wiley, open education's Everywhere Man, who set up the Utah venture and is now an associate professor of instructional psychology and technology at Brigham Young University. A newspaper once likened him to Nostradamus for claiming that universities risked irrelevance by 2020. The education oracle offers another prophecy for open courseware. "Every OCW initiative at a university that does not offer distance courses for credit," he has blogged, "will be dead by the end of calendar 2012."

    In other words: Nice knowing you, MIT OpenCourseWare. So long, Open Yale Courses.

    "I think the economics of open courseware the way we've been doing it for the last almost decade have been sort of wrong," Mr. Wiley tells The Chronicle. Projects aimed for "the world," not bread-and-butter clientele like alumni and students. "Because it's not connected to any of our core constituencies, those programs haven't been funded with core funding. And so, in a climate where the economy gets bad and foundation funding slows, then that's a critical juncture for the movement."

    Stephen E. Carson, external-relations director of MIT's OpenCourseWare, chuckles at the 2012 prediction and chides Mr. Wiley as someone who "specializes in provocative statements." But ventures around the country are seriously exploring new business strategies. For some, it's fund raising à la National Public Radio; for others, hooking open content to core operations by dangling it as a gateway to paid courses.

    For elite universities, the sustainability struggle points to a paradox of opening access. If they do grant credentials, perhaps even a certificate, could that dilute their brands?

    "Given that exclusivity has come to be seen by some as a question of how many students a university can turn away, I don't see what's going to make the selective universities increase their appetite for risking their brands by offering credits for online versions of core undergraduate courses," says Roger C. Schonfeld, research manager at Ithaka S+R, a nonprofit group focused on technology in higher education that is studying online courseware.

    The answer may be that elites won't have to. Others can.

    Ever since MIT made its curriculum freely available online, its philanthropic feat has become a global trend. Colleges compete to add new classes to the Web's ever-growing free catalog. The result is a world where content and credentials no longer need to come from the same source. A freshman at Podunk U. can study with the world's top professors on YouTube. And within the emerging megalibrary of videos and syllabi and multimedia classes—a library of perhaps 10,000 courses—proponents see the building blocks of cheaper college options for self-teachers like Mr. Ziegler.

    The Great Unbundling

    How? When open-education advocates like MIT's Mr. Carson peer into their crystal balls, the images they see often hinge on one idea: the unbundling of higher education.

    The Great Higher Education Unbundling notion is over a decade old. It's picked up buzz lately, though, as media commentators compare the Internet's threat to college "conglomerates" with the way Web sites like Craigslist clawed apart the traditional functions of newspapers.

    Now take a university like MIT, where students pay about $50,000 a year for a tightly knit package of course content, learning experiences, certification, and social life. MIT OpenCourseWare has lopped off the content and dumped it in cyberspace. Eventually, according to Mr. Carson's take on the unbundling story, online learning experiences will emerge that go beyond just content. Consider Carnegie Mellon University's Open Learning Initiative, another darling of the movement, whose multimedia courses track students' progress and teach them with built-in tutors—no professor required.

    "And then, ultimately, I think there will be increasing opportunities in the digital space for certification as well," Mr. Carson says. "And that those three things will be able to be flexibly combined by savvy learners, to achieve their educational goals at relatively low cost."

    And social life? Don't we need college to tailgate and mate?

    "Social life we'll just forget about because there's Facebook," Mr. Wiley says. "Nobody believes that people have to go to university to have a social life anymore."

    Genre-Benders

    If the paragraphs you just read triggered an it'll-never-happen snort, take a look at what futurists like Mr. Wiley are trying—today—on the margins of academe.

    In August a global group of graduate students and professors went live with an online book-club-like experiment that layers the flesh of human contact on the bones of free content. At Peer 2 Peer University, course organizers act more like party hosts than traditional professors. Students are expected to essentially teach one another, and themselves.

    In September a separate institution started that also exploits free online materials and peer teaching. At University of the People, 179 first-term freshmen are already taking part in a project that bills itself as the world's first nonprofit, tuition-free, online university.

    Continued in article

    Bob Jensen's threads on open sharing videos, lectures and course materials available free from prestigious universities ---
    http://faculty.trinity.edu/rjensen/000aaa/updateee.htm#OKI

    Bob Jensen's threads on online assessment for grading and course credit ---
    http://faculty.trinity.edu/rjensen/assess.htm#OnlineOffCampus

    Bob Jensen's threads on online training and education alternatives ---
    http://faculty.trinity.edu/rjensen/Crossborder.htm

     


    Barbara gave me permission to post the following message on March 15, 2006
    My reply follows her message.

    Professor Jensen:

    I need your help in working with regulators who are uncomfortable with online education.

    I am currently on the faculty at the University of Dallas in Irving, Texas, and I abruptly learned yesterday that the Texas State Board of Public Accountancy distinguishes between the online and on campus offerings of the ethics courses that it approves as counting toward students' CPA candidacy requirements. Since my school offers its ethics course in both modes, I am suddenly faced with making a case to the TSBPA in one week's time to avoid rejection of the online version of the University of Dallas course.

    I have included in this email the "story" as I understand it that explains my situation. It isn't a story about accounting or ethics, it is a story about online education.

    I would like to talk to you tomorrow because of your expertise in distance education and involvement in the profession. In addition, I am building a portfolio of materials this week for the Board meeting in Austin March 22-23 to make a case for their approval (or at least not rejection) of the online version of the ethics course that the Board already accepts in its on campus version. I want to include compelling research-based material demonstrating the value of online learning, and I don't have time to begin that literature survey myself. In addition, I want to be able to present preliminary results from reviewers of the University of Dallas course about the course's merit in presentation of the content in an online delivery.

    Thank you for any assistance that you can give me.

    Barbara W. Scofield
    Associate Professor of Accounting
    University of Dallas
    1845 E Northgate Irving, TX 75062
    972-721-5034

    scofield@gsm.udallas.edu

    A statement of the University of Dallas and Texas State Board of Public Accountancy and Online Learning

    The TSBPA approved the University of Dallas ethics program in 2004. The course that was approved was a long-standing course, required in several different graduate programs, called Business Ethics. The course was regularly taught on campus (since 1995) and online (since 2001).

    The application for approval of the ethics course did not ask for information about whether the class was on campus or online and the syllabus that was submitted happened to be the syllabus of an on campus section. The TSBPA's position (via Donna Hiller) is that the Board intended to approve only the on campus version of the course, and that the Board inferred it was an on campus course because the sample syllabus that was submitted was an on campus course.

    Therefore the TSBPA (via Donna Hiller) is requiring that University of Dallas students who took the online version of the ethics course retake the exact same course in its on campus format. While the TSBPA (via Donna Hiller) has indicated that the online course cannot at this time be approved and its scheduled offering in the summer will not provide students with an approved course, Donna Hiller, at my request, has indicated that she will take this issue to the Board for their decision next week at the Executive Board Meeting on March 22 and the Board Meeting on March 23.

    There are two issues:

    1. Treatment of students who were relying on communication from the Board at the time they took the class that could reasonably have been interpreted to confer approval of both the online and on campus sections of the ethics course.

    2. Status of the upcoming summer online ethics class.

    My priority is establishing the status of the upcoming summer online ethics class. The Board has indicated through its pilot program with the University of Texas at Dallas that there is a place for online ethics classes in the preparation of CPA candidates. The University of Dallas is interested in providing the TSBPA with any information or assessment necessary to meet the needs of the Board to understand the online ethics class at the University of Dallas. Although not currently privy to the Board's specific concerns about online courses, the University of Dallas believes that it can demonstrate sufficient credibility for the course because of the following factors:

    A. The content of the online course is the same as the on campus course. Content comparison can be provided.

    B. The instructional methods of the online course involve intense student-to-student, instructor-to-student, and student-to-content interaction at a level equivalent to an on campus course. Empirical information about interaction in the course can be provided.

    C. The instructor for the course is superbly qualified and a long-standing ethics instructor and distance learning instructor. The vita of the instructor can be provided.

    D. There are processes for course assessment in place that regularly prompt the review of this course and these assessments can be provided to the board along with comparisons with the on campus assessments.

    E. The University of Dallas will seek to coordinate with the work done by the University of Texas at Dallas to provide information at least equivalent to that provided by the University of Texas at Dallas and to meet at a minimum the tentative criteria for online learning that UT Dallas has been empowered to recommend to the TSBPA. Contact with the University of Texas at Dallas has been initiated.

    When the online ethics course is granted a path to approval by the Board, I am also interested in addressing the issue of TSBPA approval of students who took the class between the original ethics course approval date and March 13, 2006, the date that the University of Dallas became aware of the TSBPA intent (through Donna Hiller) that the TSBPA distinguished online and on campus ethics classes.

    The University of Dallas believes that the online class in fact provided these students with a course that completely fulfilled the general intent of the Board for education in ethics, since it is the same course as the approved on campus course (see above). The decision on the extent of commitment of the Board to students who relied on the Board's approval letter may be a legal issue of some sort that is outside of the current decision-making of the Board, but I want the Board to take the opportunity to consider that the reasonableness of the students' position and the students' actual preparation in ethics suggest that there should also be a path created to approval of online ethics courses taken at the University of Dallas during this prior time period. The currently proposed remedy of requiring students to retake on campus the very same course that they have already taken online appears excessively costly to Texans and the profession of accounting by delaying the entry of otherwise qualified individuals into public accountancy. High cost is justified when the concomitant benefits are also high. However, the benefit to Texans and the accounting profession from students who retake the ethics course seems to exist only in meeting the requirements of regulations that all parties diligently sought to meet in the first place, not in producing any actual additional learning experiences.

    A reply to her from Bob Jensen

    Hi Barbara,

    May I share your questions and my responses in the next edition of New Bookmarks? This might be helpful to your efforts when others become informed. I will be in my office every day except for March 17. My phone number is 210-999-7347. However, I can probably be more helpful via email.

    As discouraging as it may seem, if students know what is expected of them and must demonstrate what they have learned, pedagogy does not seem to matter. It can be online or onsite. It can be lecture or cases. It can be no teaching at all if there are talented and motivated students who are given great learning materials. This is called the well-known “No Significant Difference” phenomenon --- http://www.nosignificantdifference.org/

    I think you should stress that insisting upon onsite courses is discriminatory against potential students whose life circumstances make it difficult or impossible to attend regular classes on campus.

    I think you should make the case that online education is just like onsite education in the sense that learning depends on the quality and motivations of the students, faculty, and university that sets the employment and curriculum standards for quality. The issue is not onsite versus online. The issue is quality of effort.

    The most prestigious schools like Harvard and Stanford and Notre Dame have a large number of credit and non-credit courses online. Entire accounting undergraduate and graduate degree programs are available online from such quality schools as the University of Wisconsin and the University of Maryland.  My guide to online training and education programs is at http://faculty.trinity.edu/rjensen/crossborder.htm

    My main introductory document on the future of distance education is at http://faculty.trinity.edu/rjensen/000aaa/updateee.htm

    Anticipate and deal with the main arguments against online education. The typical argument is that onsite students have more learning interactions with each other and with the instructor. This is absolutely false if the distance education course is designed to promote online interactions that do a better job of getting into each other's heads.  In that case online courses become superior to onsite courses.

    Amy Dunbar teaches intensely interactive online courses with Instant Messaging. See Dunbar, A. 2004. “Genesis of an Online Course.” Issues in Accounting Education (2004),19 (3):321-343.

    ABSTRACT: This paper presents a descriptive and evaluative analysis of the transformation of a face-to-face graduate tax accounting course to an online course. One hundred fifteen students completed the compressed six-week class in 2001 and 2002 using WebCT, classroom environment software that facilitates the creation of web-based educational environments. The paper provides a description of the required technology tools and the class conduct. The students used a combination of asynchronous and synchronous learning methods that allowed them to complete the coursework on a self-determined schedule, subject to semi-weekly quiz constraints. The course material was presented in content pages with links to Excel® problems, Flash examples, audio and video files, and self-tests. Students worked the quizzes and then met in their groups in a chat room to resolve differences in answers. Student surveys indicated satisfaction with the learning methods.

    I might add that Amy is a veteran world class instructor both onsite and online. She’s achieved all-university awards for onsite teaching in at least three major universities. This gives her the credentials to judge how well her online courses compare with her outstanding onsite courses.

    A free audio download of a presentation by Amy Dunbar is available at
    http://www.cs.trinity.edu/~rjensen/002cpe/02start.htm#2002   

    The argument that students cannot be properly assessed for learning online is more problematic. Clearly it is easier to prevent cheating with onsite examinations. But there are ways of dealing with this problem.  My best example of an online graduate program that is extremely rigorous is the Chartered Accountant School of Business (CASB) master's program for all of Western Canada. Students are required to take some onsite testing even though this is an online degree program. And CASB does a great job with ethics online. I was engaged to formally assess this program and came away extremely impressed. My main contact there is Don Carter carter@casb.com  .  If you are really serious about this, I would invite Don to come down and make a presentation to the Board. Don will convince them of the superiority of online education.

    You can read some about the CASB degree program at http://www.casb.com/

    You can read more about assessment issues at http://faculty.trinity.edu/rjensen/assess.htm

    I think a lot of the argument against distance education comes from faculty fearful of one day having to teach online. First there is the fear of change. Second there is the genuine fear that is entirely justified --- if online teaching is done well it is more work and strain than onsite teaching. The strain comes from increased hours of communication with each and every student.

    Probably the most general argument in favor of onsite education is that students living on campus have the social interactions and maturity development outside of class. This is most certainly a valid argument. However, when it comes to issues of learning of course content, online education can be as good as or generally better than onsite classes. Students in online programs are often older and more mature such that the on-campus advantages decline in their situations. Online students generally have more life, love, and work experiences already under their belts. And besides, you’re only talking about ethics courses rather than an entire undergraduate or graduate education.

    I think if you deal with the learning interaction and assessment issues that you can make a strong case for distance education. There are some “dark side” arguments that you should probably avoid. But if you care to read about them, go to http://faculty.trinity.edu/rjensen/000aaa/theworry.htm

    Bob Jensen

    March 15, 2006 reply from Bruce Lubich [BLubich@UMUC.EDU]

    Bob, as a director and teacher in a graduate accounting program that is exclusively online, I want to thank you for your support and eloquent defense of online education. Unfortunately, Texas's predisposition against online teaching also shows up in its education requirements for sitting for the CPA exam. Of the 30 required upper division accounting credits, at least 15 must "result from physical attendance at classes meeting regularly on the campus" (quote from the Texas State Board of Public Accountancy website at www.tsbpa.state.tx.us/eq1.htm)

    Cynically speaking, it seems the state of Texas wants to be sure its classrooms are occupied.

    Barbara, best of luck with your testimony.

    Bruce Lubich
    Program Director,
    Accounting Graduate School of Management and Technology
    University of Maryland University College

    March 15, 2006 reply from David Albrecht [albrecht@PROFALBRECHT.COM]

    At my school, Bowling Green, student credits for on-line accounting majors' classes are never approved by the department chair. He says that you can't trust the schools that are offering these. When told that some very reputable schools are offering the courses, he still says no, because when the testing process is done on-line, or not in the physical presence of the professor, the grades simply can't be trusted.

    David Albrecht

    March 16, 2006 reply from Bob Jensen

    Hi David,

    One tack against a luddite like that is to propose a compromise that accepts virtually all transfer credits from AACSB-accredited universities. It's difficult to argue that standards vary between online and onsite courses in a given program accredited by the AACSB. I seriously doubt that the faculty in that program would allow a double academic standard.

    In fact, on transcripts it is often impossible to distinguish online from onsite credits from a respected university, especially when the same course is offered online and onsite (i.e., merely in different sections).

    You might explain to your department chair that he's probably been accepting online transfer credits for some time. The University of North Texas and other major universities now offer online courses to full-time resident students who live on campus. Some students and instructors find this to be a better approach to learning.

    And you might ask him why Bowling Green's assessment rigor is not widely known to be vastly superior to that of online courses from nearly all major universities that now offer distance education courses and even total degree programs, including schools like the Fuqua Graduate School at Duke, Stanford University (especially computer science and engineering online courses that bring in over $100 million per year), the University of Maryland, the University of Wisconsin, the University of Texas, Texas Tech, and even, gasp, The Ohio State University.

    You might tell your department chair that by not offering some online alternatives, Bowling Green is not getting the most out of its students. The University of Illinois conducted a major study that found that students performed better in online versus onsite courses when matched pair sections took the same examinations.

    And then you might top it off by asking your department chair how he justifies denying credit for Bowling Green's own distance education courses --- http://adultlearnerservices.bgsu.edu/index.php?x=opportunities 
    The following is a quotation from the above Bowling Green site:

    *****************************
    The advancement of computer technology has provided a wealth of new opportunities for learning. Distance education is one example of technology’s ability to expand our horizons and gain from new experiences. BGSU offers many distance education courses and two baccalaureate degree completion programs online.

    The Advanced Technological Education Degree Program is designed for individuals who have completed a two-year applied associate’s degree. The Bachelor of Liberal Studies Degree Program is ideal for students with previous college credit who would like flexibility in course selection while completing a liberal education program.

    Distance Education Courses and Programs --- http://ideal.bgsu.edu/ONLINE/
    ***************************

    Bob Jensen

    March 16, 2006 reply from Amy Dunbar [Amy.Dunbar@BUSINESS.UCONN.EDU]

    Count me in the camp that just isn't that concerned about online cheating. Perhaps that is because my students are graduate students and my online exams are open-book, timed exams, and a different version is presented to each student (much like a driver's license exam). In my end-of-semester survey, I ask whether students are concerned about cheating, and on occasion, I get one who is. But generally the response is no.
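    Dunbar's point that "a different version is presented to each student" is itself an anti-cheating control worth noting. Her WebCT mechanism isn't described here, but one common way to get reproducible per-student versions is to seed a random generator from the student ID. The question bank and function below are hypothetical, a sketch of the general technique rather than her actual implementation:

    ```python
    import hashlib
    import random

    # Hypothetical bank: each question has several interchangeable versions.
    QUESTION_BANK = {
        "Q1": ["Q1 version A", "Q1 version B", "Q1 version C"],
        "Q2": ["Q2 version A", "Q2 version B", "Q2 version C"],
    }

    def exam_for(student_id, bank=QUESTION_BANK):
        # Seeding from a hash of the student ID makes the draw deterministic:
        # the same student always receives the same versions (so the exam can
        # be regenerated for grading), while different students get different
        # mixes of versions.
        seed = int(hashlib.sha256(student_id.encode()).hexdigest(), 16)
        rng = random.Random(seed)
        return {q: rng.choice(versions) for q, versions in bank.items()}
    ```

    Because the assignment is a pure function of the student ID, no per-student state needs to be stored to reproduce an exam later.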

    The UConn accounting department was just reviewed by the AACSB, and they were impressed by our MSA online program. They commented that they now believed that an online MSA program was possible. I am convinced that the people who are opposed to online education are unwilling to invest the time to see how online education is implemented. Sure there will be bad examples, but there are bad examples of face-to-face (FTF) teaching. How many profs do you know who simply read PowerPoint slides to a sleeping class?! Last semester, I received the School of Business graduate teaching award even though I teach only online classes. I believe that the factor that really matters is that the students know you care about whether they are learning. A prof who cares interacts with students. You can do that online as well as FTF.

    Do I miss FTF teaching -- you bet I do. But once I focused on what the student really needs to learn, I realized, much to my dismay, interacting FTF with Dunbar was not a necessary condition.

    Amy Dunbar

    March 16, 2006 message from Carol Flowers [cflowers@OCC.CCCD.EDU]

    To resolve this issue and make me more comfortable with the grade a student earns, I have all my online exams proctored. I schedule weekends (placing them in the schedule of classes) and it is mandatory that they take the exams during this weekend period (Fri/Sat) at our computing center. It is my policy that if they can't take the paced exams during those periods, then the class is not one that they can participate in. This is no different from having different times that courses are offered. They have to make a choice in that situation, also, as to which time will best serve their needs.

    March 16, 2006 reply from David Fordham, James Madison University [fordhadr@JMU.EDU]

    Our model is similar to Carol Flowers's. Our on-line MBA program requires an in-person meeting for four hours at the beginning of every semester, to let the students and professor get to know each other personally, followed by the distance-ed portion, concluding with another four-hour in-person session for the final examination or other assessment. The students all congregate at the Sheraton at Dulles airport, have dinner together Friday night, spend Saturday morning taking the final for their previous class, and spend Saturday afternoon being introduced to their next class. They do this between every semester. So far, the on-line group has outperformed (very slightly, and not statistically significant due to small sample sizes) the face-to-face counterparts being used as our control groups. We believe the outperformance might have an inherent self-selection bias since the distance-learners are usually professionals, whereas many of our face-to-face students are full-time students and generally a bit younger and more immature.

    My personal on-line course consists of exactly the same readings as my F2F class, and exactly the same lectures (recorded using Tegrity) provided on CD and watched asynchronously, followed by on-line synchronous discussion sessions (2-3 hours per week) where I call on random students asking questions about the readings, lectures, etc., and engaging in lively discussion. I prepare some interesting cases and application dilemmas (mostly adapted from real world scenarios) and introduce dilemmas, gray areas, controversy (you expected maybe peace and quiet from David Fordham?!), and other thought-provoking issues for discussion. I have almost perfect attendance in the on-line synchronous because the students really find the discussions engaging. Surprisingly, I have no problem with freeloaders who don't read or watch the recorded lectures. My major student assessment vehicle is an individual policy manual, supplemented by the in-person exam. Since each student's manual organization, layout, approach, and perspective is so very different from the others, cheating is almost out of the question. And the in-person exam is conducted almost like the CISP or old CPA exams... total quiet, no talking, no leaving the room, nothing but a pencil, etc.

    And finally, no, you can't tell the difference on our students' transcripts as to whether they took the on-line or in-person MBA. They look identical.

    We've not yet had any problem with anyone "rejecting" our credential that I'm aware of.

    Regarding our own acceptance of transfer credit, we make the student provide evidence of the quality of each course (not the degree) before we exempt or accept credit. We do not distinguish between on-line and F2F -- nor do we automatically accept a course based on institution reputation. We have on many occasions rejected AACSB-accredited institution courses (on a course-by-course basis) because our investigation showed that the course coverage or rigor was not up to the standard we required. (The only "blanket" exception that we make is for certain familiar Virginia community college courses in the liberal studies where history has shown that the college and coursework reliably meet the standards -- every other course has to be accepted on a course-by-course basis.)

    Just our $0.02 worth.

    David Fordham
    James Madison University


    DOES DISTANCE LEARNING WORK?
    A LARGE SAMPLE, CONTROL GROUP STUDY OF STUDENT SUCCESS IN DISTANCE LEARNING
    by James Koch --- http://www.usq.edu.au/electpub/e-jist/docs/vol8_no1/fullpapers/distancelearning.htm

    The relevant public policy question is this---Does distance learning "work" in the sense that students experience at least as much success when they utilize distance learning modes as when they pursue conventional bricks-and-mortar education? The answer to this question is critical in determining whether burgeoning distance learning programs are cost-effective investments, either for students or for governments.

    Of course, it is difficult to measure the "learning" in distance learning, not least because distance learning courses now span nearly every academic discipline. Hence, most large sample evaluative studies utilize students’ grades as an imperfect proxy for learning. That approach is followed in the study reported here as well.

    A recent review of research in distance education reported that 1,419 articles and abstracts appeared in major distance education journals and as dissertations during the 1990-1999 period (Berge and Mrozowski, 2001). More than one hundred of these studies focused upon various measures of student success (such as grades, subsequent academic success, and persistence) in distance learning courses. Several asked the specific question addressed in this paper: Why do some students do better than others, at least as measured by the grade they receive in their distance learning course? A profusion of contradictory answers has emanated from these studies (Berge and Mrozowski, 2001; Machtmes and Asher, 2000). It is not yet clear how important factors such as the student’s characteristics (age, ethnic background, gender, academic background, etc.) are to individual student success. And, other than knowing that experienced faculty are more effective than less experienced faculty (Machtmes and Asher, 2000), we know even less about how important the characteristics of distance learning faculty are to student success, particularly where televised, interactive distance learning is concerned.

    Perhaps the only truly strong conclusion emerging from previous empirical studies of distance learning is the oft-cited "no significant difference" finding (Saba, 2000). Indeed, an entire web site, http://teleeducation.nb.ca/nosignificantdifference, exists that reports 355 such "no significant difference" studies. Yet, without quarreling with such studies, we can note that they do not tell us why some students achieve better grades than others when they utilize distance learning.

    Several studies have suggested that student learning styles and receptivity to distance learning influence student success (see Taplin and Jegede, 2001, for a short survey). Unfortunately, as Maushak et al. (2001) point out, these intuitively sensible findings are not yet highly useful, because they are not based upon large sample, control group evidence that relates recognizable student learning styles to student performance. Studies that rely upon "conversation and discourse analysis" (Chen and Willits, 1999, provide a representative example) and interviews with students are helpful, yet are sufficiently anecdotal that they are unlikely to lead us to scientifically based conclusions about what works and what does not.

    This paper moves us several steps forward in terms of our knowledge by means of a very large distance education sample (76,866 individual student observations) and an invaluable control group of students who took the identical course at the same time from the same instructor, but did so "in person" in a conventional "bricks and mortar" location. The results indicate that gender, age, ethnic background, distance learning experience, experience with the institution providing the instruction, and measures of academic aptitude and previous academic success are statistically significant determinants of student success. Similarly, faculty characteristics such as gender, age, ethnic background, and educational background are statistically significant predictors of student success, though not necessarily in the manner one might hypothesize.
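Koch's design, which regresses a grade outcome on student and faculty characteristics plus a delivery-mode indicator, can be sketched in miniature. The toy dataset and variable choices below are hypothetical, standing in for the actual 76,866-observation sample; the point is only the mechanics of estimating a mode effect while controlling for prior achievement:

```python
# Illustrative sketch only: invented mini-dataset, not Koch's data.
# Each student row: (distance dummy, prior GPA, course grade on a 4.0 scale).
def ols(X, y):
    """Ordinary least squares via the normal equations (Gauss-Jordan elimination)."""
    k = len(X[0])
    # Build the augmented system [X'X | X'y]
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] +
         [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(k)]
    for col in range(k):
        p = max(range(col, k), key=lambda r: abs(A[r][col]))  # partial pivot
        A[col], A[p] = A[p], A[col]
        for r in range(k):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][k] / A[i][i] for i in range(k)]

rows = [  # (distance?, prior GPA, course grade)
    (1, 3.6, 3.7), (1, 3.1, 3.2), (1, 2.8, 3.0), (1, 3.4, 3.5),
    (0, 3.5, 3.5), (0, 3.0, 3.0), (0, 2.9, 2.9), (0, 3.3, 3.3),
]
X = [[1.0, d, gpa] for d, gpa, _ in rows]  # intercept, mode dummy, covariate
y = [g for _, _, g in rows]
b0, b_mode, b_gpa = ols(X, y)
print(f"mode effect = {b_mode:+.2f} grade points, GPA slope = {b_gpa:.2f}")
```

The coefficient on the mode dummy is the estimated delivery-mode effect net of prior GPA, which is the same logic, at toy scale, as holding student characteristics constant in the full regression.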

    Continued in this working paper


    January 6, 2006 message from Carolyn Kotlas [kotlas@email.unc.edu]

    No Significant Difference Phenomenon website http://www.nosignificantdifference.org/ 

    The website is a companion piece to Thomas L. Russell's book THE NO SIGNIFICANT DIFFERENCE PHENOMENON, a bibliography of 355 research reports, summaries, and papers that document no significant differences in student outcomes between alternate modes of education delivery.


    DISTANCE LEARNING AND FACULTY CONCERNS

    Despite the growing number of distance learning programs, faculty are often reluctant to move their courses into the online medium. In "Addressing Faculty Concerns About Distance Learning" (ONLINE JOURNAL OF DISTANCE LEARNING ADMINISTRATION, vol. VIII, no. IV, Winter 2005) Jennifer McLean discusses several areas that influence faculty resistance, including: the perception that technical support and training are lacking, the fear of being replaced by technology, and the absence of a clearly understood institutional vision for distance learning. The paper is available online at
    http://www.westga.edu/%7Edistance/ojdla/winter84/mclean84.htm

    The Online Journal of Distance Learning Administration is a free, peer-reviewed quarterly published by the Distance and Distributed Education Center, The State University of West Georgia, 1600 Maple Street, Carrollton, GA 30118 USA; Web: http://www.westga.edu/~distance/jmain11.html

    Bob Jensen's threads on faculty concerns are at http://faculty.trinity.edu/rjensen/assess.htm

    Also see Bob Jensen's threads on the dark side at http://faculty.trinity.edu/rjensen/000aaa/theworry.htm


    QUESTIONING THE VALUE OF LEARNING TECHNOLOGY

    "The notion that the future of education lies firmly in learning technology, seen as a tool of undoubted magnitude and a powerful remedy for many educational ills, has penetrated deeply into the psyche not only of those involved in delivery but also of observers, including those in power within national governments." In a paper published in 1992, Gabriel Jacobs expressed his belief that hyperlink technology would be a "teaching resource that would transform passive learners into active thinkers." In "Hypermedia and Discovery Based Learning: What Value?" (AUSTRALASIAN JOURNAL OF EDUCATIONAL TECHNOLOGY, vol. 21, no. 3, 2005, pp. 355-66), he reconsiders his opinions, "the result being that the guarded optimism of 1992 has turned to a deep pessimism." Jacobs' paper is available online at http://www.ascilite.org.au/ajet/ajet21/jacobs.html .

    The Australasian Journal of Educational Technology (AJET) [ISSN 1449-3098 (print), ISSN 1449-5554 (online)], published three times a year, is a refereed journal publishing research and review articles in educational technology, instructional design, educational applications of computer technologies, educational telecommunications, and related areas. Back issues are available on the Web at no cost. For more information and back issues go to http://www.ascilite.org.au/ajet/ajet.html .

    See Bob Jensen's threads on the dark side at http://faculty.trinity.edu/rjensen/000aaa/theworry.htm


    June 1, 2007 message from Carolyn Kotlas [kotlas@email.unc.edu]

    TEACHING THE "NET GENERATION"

    The April/May 2007 issue of INNOVATE explores and explains the learning styles and preferences of Net Generation learners. "Net Generation learners are information seekers, comfortable using technology to seek out information, frequently multitasking and using multiple forms of media simultaneously. As a result, they desire independence and autonomy in their learning processes."

    Articles include:

    "Identifying the Generation Gap in Higher Education: Where Do the Differences Really Lie?"
    by Paula Garcia and Jingjing Qin, Northern Arizona University

    "MyLiteracies: Understanding the Net Generation through LiveJournals and Literacy Practices"
    by Dana J. Wilber, Montclair State University

    "Is Education 1.0 Ready for Web 2.0 Students?"
    by John Thompson, Buffalo State College

    The issue is available at http://innovateonline.info/index.php.

    Registration is required to access articles; registration is free.

    Innovate: Journal of Online Education [ISSN 1552-3233], an open-access, peer-reviewed online journal, is published bimonthly by the Fischler School of Education and Human Services at Nova Southeastern University.

    The journal focuses on the creative use of information technology (IT) to enhance educational processes in academic, commercial, and governmental settings. For more information, contact James L. Morrison, Editor-in-Chief; email: innovate@nova.edu ;
    Web:  http://innovateonline.info/.

    The journal also sponsors Innovate-Live webcasts and discussion forums that add an interactive component to the journal articles. To register for these free events, go to http://www.uliveandlearn.com/PortalInnovate/.

    See also:

    "Motivating Today's College Students"
    By Ian Crone
    PEER REVIEW, vol. 9, no. 1, Winter 2007

    http://www.aacu.org/peerreview/pr-wi07/pr-wi07_practice.cfm

    Peer Review, published quarterly by the Association of American Colleges and Universities (AACU), provides briefings on "emerging trends and key debates in undergraduate liberal education. Each issue is focused on a specific topic, provides comprehensive analysis, and highlights changing practice on diverse campuses." For more information, contact: AACU, 1818 R Street NW, Washington, DC 20009 USA;

    tel: 202-387-3760; fax: 202-265-9532;
    Web: 
    http://www.aacu.org/peerreview/.

    For a perspective on educating learners on the other end of the generational continuum see:

    "Boomer Reality"
    By Holly Dolezalek
    TRAINING, vol. 44, no. 5, May 2007

    http://www.trainingmag.com/msg/content_display/publications/e3if330208bec8f4014fac339db9fd0678e

    Training [ISSN 0095-5892] is published monthly by Nielsen Business Media, Inc., 770 Broadway, New York, NY 10003-9595 USA;
    tel: 646-654-4500; email:
    bmcomm@nielsen.com ;
    Web:  http://www.trainingmag.com.

    Bob Jensen's threads on learning can be found at the following Web sites:

    http://faculty.trinity.edu/rjensen/assess.htm

    http://faculty.trinity.edu/rjensen/255wp.htm

    http://faculty.trinity.edu/rjensen/265wp.htm

    http://faculty.trinity.edu/rjensen/000aaa/0000start.htm

     


    June 1, 2007 message from Carolyn Kotlas [kotlas@email.unc.edu]

    TECHNOLOGY AND CHANGE IN EDUCATIONAL PRACTICE

    "Even if research shows that a particular technology supports a certain kind of learning, this research may not reveal the implications of implementing it. Without appropriate infrastructure or adequate provisions of services (policy); without the facility or ability of teachers to integrate it into their teaching practice (academics); without sufficient support from technologists and/or educational technologists (support staff), the likelihood of the particular technology or software being educationally effective is questionable."

    The current issue (vol. 19, no. 1, 2007) of the JOURNAL OF EDUCATIONAL TECHNOLOGY & SOCIETY presents a selection of papers from the Technology and Change in Educational Practice conference, held at the London Knowledge Lab, Institute of Education, London, in October 2005.

    The papers cover three areas: "methodological frameworks, proposing new ways of structuring effective research; empirical studies, illustrating the ways in which technology impacts the working roles and practices in Higher Education; and new ways of conceptualising technologies for education."

    Papers include:

    "A Framework for Conceptualising the Impact of Technology on Teaching and Learning"
    by Sara Price and Martin Oliver, London Knowledge Lab, Institute of Education

    "New and Changing Teacher Roles in Higher Education in a Digital Age"
    by Jo Dugstad Wake, Olga Dysthe, and Stig Mjelstad, University of Bergen

    "Academic Use of Digital Resources: Disciplinary Differences and the Issue of Progression Revisited"
    by Bob Kemp, Lancaster University, and Chris Jones, Open University

    "The Role of Blogs In Studying the Discourse and Social Practices of Mathematics Teachers"
    by Katerina Makri and Chronis Kynigos, University of Athens

    The issue is available at http://www.ifets.info/issues.php?show=current.

    The Journal of Educational Technology and Society [ISSN 1436-4522] is a peer-reviewed, quarterly publication that "seeks academic articles on the issues affecting the developers of educational systems and educators who implement and manage such systems." Current and back issues are available at http://www.ifets.info/. The journal is published by the International Forum of Educational Technology & Society. For more information, see http://ifets.ieee.org/.

    Bob Jensen's threads on blogs and listservs are at http://faculty.trinity.edu/rjensen/ListservRoles.htm

    Bob Jensen's threads on education technologies are at http://faculty.trinity.edu/rjensen/000aaa/0000start.htm

    Bob Jensen's threads on distance education and training alternatives are at http://faculty.trinity.edu/rjensen/Crossborder.htm


    Civil Rights Groups That Favor Standardized Testing

    "Teachers and Rights Groups Oppose Education Measure," by Diana Jean Schemo, The New York Times, September 11, 2007 --- http://www.nytimes.com/2007/09/11/education/11child.html?_r=1&oref=slogin

    The draft House bill to renew the federal No Child Left Behind law came under sharp attack on Monday from civil rights groups and the nation’s largest teachers unions, the latest sign of how difficult it may be for Congress to pass the law this fall.

    At a marathon hearing of the House Education Committee, legislators heard from an array of civil rights groups, including the Citizens’ Commission on Civil Rights, the National Urban League, the Center for American Progress and Achieve Inc., a group that works with states to raise academic standards.

    All protested that a proposal in the bill for a pilot program that would allow districts to devise their own measures of student progress, rather than using statewide tests, would gut the law’s intent of demanding that schools teach all children, regardless of poverty, race or other factors, to the same standard.

    Dianne M. Piché, executive director of the Citizens’ Commission on Civil Rights, said the bill had “the potential to set back accountability by years, if not decades,” and would lead to lower standards for children in urban and high poverty schools.

    “It strikes me as not unlike allowing my teenage son and his friends to score their own driver’s license tests,” Ms. Piché said, adding, “We’ll have one set of standards for the Bronx and one for Westchester County, one for Baltimore and one for Bethesda.”

    Continued in article


    "Obama’s Union-Friendly, Feel-Good Approach to Education," by Kyle Olson, Townhall, March 30, 2011 ---
    http://townhall.com/columnists/kyleolson/2011/03/30/obama%E2%80%99s_union-friendly,_feel-good_approach_to_education

  • The Obama administration, principally the president and Education Secretary Arne Duncan, is now routinely making public statements that lead to one conclusion: instead of fixing American education, we should dumb down the standards.

    According to the Associated Press, President Obama “is pushing a rewrite of the nation’s education law that would ease some of its rigid measurement tools” and wants “a test that ‘everybody agrees makes sense’ and administer it in less pressure-packed atmospheres, potentially every few years instead of annually.”

    The article goes on to say that Obama wants to move away from proficiency goals in math, science and reading, in favor of the ambiguous and amorphous goals of student readiness for college and career.

    Obama’s new focus comes on the heels of a New York Times report that 80% of American public schools could be labeled as failing under the standards of No Child Left Behind.

    Put another way: the standards under NCLB have revealed that the American public education system is full of cancer. Instead of treating the cancer, Obama wants to change the test, as if ignoring the MRI somehow makes the cancer go away.

    So instead of implementing sweeping policies to correct the illness, Obama is suggesting that we just stop testing to pretend it doesn’t exist.

    If Obama were serious about curing the disease, one of the best things he could do is to ensure that there is a quality teacher in every classroom in America. Of course, that would mean getting rid of teacher tenure and scrapping seniority rules that favor burned-out teachers over ambitious and innovative young teachers.

    That means standing up to the teacher unions. For a while, it looked like Obama would get tough with the unions, but not anymore. With a shaky economy and three wars, it looks like Obama’s re-election is in serious jeopardy. He needs all hands on deck – thus the new union-friendly education message.

    Obama’s new direction will certainly make the unionized adults happy. They’ve hated NCLB from the get-go.

    And the unions will love Obama’s talk about using criteria other than standardized testing in evaluating schools.

    He doesn’t get specific, of course, but I bet I can fill in the gaps. If testing is too harsh, perhaps we can judge students and schools based on how hard they try or who can come up with the most heart-wrenching excuse for failure or how big the dog was that ate their homework.

    Continued in article

  • "Department of Injustice," by Walter E. Williams, Townhall, March 30, 2011 ---
    http://townhall.com/columnists/walterewilliams/2011/03/30/department_of_injustice

    One of the requirements to become a Dayton, Ohio police officer is to successfully pass the city's two-part written examination. Applicants must correctly answer 57 of 86 questions on the first part (66 percent) and 73 of 102 (72 percent) on the second part. Dayton's Civil Service Board reported that 490 candidates passed the November 2010 written test, 57 of whom were black. About 231 of the roughly 1,100 test takers were black.

    The U.S. Department of Justice, led by Attorney General Eric Holder, rejected the results of Dayton's Civil Service examination because not enough blacks passed. The DOJ has ordered the city to lower the passing score. The lowered passing grade requires candidates to answer 50 of 86 (58 percent) questions correctly on the first part and 64 of 102 (63 percent) of questions on the second. The DOJ-approved scoring policy requires potential police officers to earn the equivalent of an "F" on the first part and a "D" on the second. Based on the DOJ-imposed passing scores, a total of 748 people, 258 more than before, were reported passing the exam. Unreported was just how many of the 258 are black.
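The raw cutoffs Williams quotes convert to the stated percentages; a quick check of that arithmetic:

```python
# Dayton police exam cutoffs quoted in the article: (correct answers, total questions)
original = [(57, 86), (73, 102)]   # original passing scores, parts 1 and 2
lowered  = [(50, 86), (64, 102)]   # DOJ-approved lowered passing scores

for label, cuts in [("original", original), ("lowered", lowered)]:
    pcts = [round(100 * c / t) for c, t in cuts]
    print(label, pcts)   # original [66, 72], lowered [58, 63]
```

The output matches the 66/72 percent and 58/63 percent figures given in the article.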

    Keith Lander, chairman of the Dayton chapter of the Southern Christian Leadership Conference, and Dayton NAACP president Derrick Foward condemned the DOJ actions.

    Mr. Lander said, "Lowering the test score is insulting to black people," adding, "The DOJ is creating the perception that black people are dumb by lowering the score. It's not accomplishing anything."

    Mr. Foward agreed and said, "The NAACP does not support individuals failing a test and then having the opportunity to be gainfully employed," adding, "If you lower the score for any group of people, you're not getting the best qualified people for the job."

    I am pleased by the positions taken by Messrs. Lander and Foward. It is truly insulting to suggest that black people cannot meet the same standards as white people and somehow justice requires lower standards. Black performance on Dayton's Civil Service exam is really a message about fraudulent high school diplomas that many black students receive.

    Continued in article

     


    What works in education?

    As I said previously, great teachers come in about as many varieties as flowers. Click on the link below to read about some of the varieties recalled by students from their high school days. It should be noted that "favorite teacher" is not synonymous with "learned the most." Favorite teachers are often great at entertaining and/or motivating. Favorite teachers often make learning fun in a variety of ways.

    However, students may actually learn the most from pretty dull teachers with high standards and demanding assignments and exams. Dull teachers may also be the dedicated souls who are willing to spend extra time in one-on-one sessions or extra-hour tutorials that ultimately have an enormous impact on mastery of the course. And then there are teachers who are neither entertaining nor generous with face-to-face time, yet are winners because they have developed learning materials that produce far more student learning than other teachers achieve.

    The recollections below tend to lean toward entertaining and "fun" teachers, but you must keep in mind that these were written after the fact by former high school students. In high school, dull teachers tend not to be popular before or after the fact. This is not always the case when former students recall their college professors.

    Handicapped Learning Aids Work Wonders --- http://faculty.trinity.edu/rjensen/000aaa/thetools.htm#Handicapped

    Asynchronous Learning --- http://faculty.trinity.edu/rjensen/255wp.htm
    Especially note the SCALE Experiments conducted at the University of Illinois ---
    http://faculty.trinity.edu/rjensen/255wp.htm#Illinois
     

    "A dozen roses to my favorite teacher," The Philadelphia Inquirer, November 30, 2004 --- http://www.philly.com/mld/inquirer/news/special_packages/phillycom_teases/10304831.htm?1c 


    Mathematics Assessment Project (learning assessment) --- http://map.mathshell.org


    American Council on Education - GED Testing --- http://www.acenet.edu/Content/NavigationMenu/ged/index.htm



    Classroom Tips
    Yes! 50 Scientifically Proven Ways to Be Persuasive

    From the Financial Rounds Blog on May 4, 2009 --- http://financialrounds.blogspot.com/

    Using "Yes! 50 Scientifically Proven Ways to Be Persuasive" in the Classroom

    I recently started reading Goldstein, Martin, and Cialdini's "Yes!: 50 Scientifically Proven Ways to Be Persuasive." It could easily be described as "Freakonomics for social psychology." It's a fun, easy, and very informative read; each chapter is only about 1,500-2,000 words long and highlights one persuasion technique, so you can knock out a chapter in 10 minutes or so.

    It's a very interesting introduction to the social psychology literature on persuasion - it lists all the underlying research in the appendix.

    In addition to learning some interesting things, I've also gotten some great ideas to use in my classes. I'll be discussing these over the next few weeks, starting with

    Chapters 1 & 2:
    "The Bandwagon effect"

    One way to increase compliance with a request is to mention that a lot of other people have done the same thing. In these chapters, the authors mention a study in which they tried to see whether they could increase the percentage of hotel guests who reused towels at least once during their stay. Their solution was simple. Hotels typically put a little card in the room touting the benefits of reusing towels. All the authors did was add a line to the effect that the majority of hotel guests do in fact reuse their towels at least once during their stay. This dramatically increased the percentage of people who chose to reuse