In 2017 my website was migrated to the cloud and reduced in size.
Hence some links below are broken.
Contact me at rjensen@trinity.edu if you really need a file that is missing.

 

Great Minds in Management:  The Process of Theory Development
Bob Jensen at Trinity University
Homepage:  http://faculty.trinity.edu/rjensen

In April 2006 I commenced reading a heavy book entitled Great Minds in Management:  The Process of Theory Development, Edited by Ken G. Smith and Michael A. Hitt (Oxford University Press, 2006).

The essays are somewhat personalized in terms of how theory development is perceived by each author and how these perceptions changed over time.

Below I will share some of the key quotations as I proceed through this book. The book is somewhat heavy going, so it will take some time to add selected quotations to the list below.

Bob Jensen

"Cornell Theory Center Aids Social Science Researchers," PR Web, June 19, 2006 --- http://www.prweb.com/releases/2006/6/prweb400160.htm

The following "great minds" contributed to this book:

 

  PART I INDIVIDUALS AND THEIR ENVIRONMENT  

2. The Evolution of Social Cognitive Theory--ALBERT BANDURA 9
3. Image Theory--LEE R. BEACH and TERENCE R. MITCHELL 36
4. The Road to Fairness and Beyond--ROBERT FOLGER 55
5. Grand Theories and Mid-Range Theories: Cultural Effects on Theorizing and the Attempt to Understand Active Approaches to Work--MICHAEL FRESE 84
6. Upper Echelons Theory: Origins, Twists and Turns, and Lessons Learned--DONALD C. HAMBRICK 109
7. Goal Setting Theory: Theory Building by Induction--EDWIN A. LOCKE and GARY P. LATHAM 128
8. How Job Characteristics Theory Happened--GREG R. OLDHAM and J. RICHARD HACKMAN 151
9. Do Employee Attitudes towards Organizations Matter? The Study of Employee Commitment to Organizations--LYMAN W. PORTER, RICHARD M. STEERS and RICHARD T. MOWDAY 171
10. Developing Psychological Contract Theory--DENISE M. ROUSSEAU 190
11. The Escalation of Commitment: Steps toward an Organizational Theory--BARRY M. STAW 215
12. On the Origins of Expectancy Theory--VICTOR H. VROOM 239
  PART II BEHAVIOR OF ORGANIZATIONS  
13. Double-Loop Learning in Organizations: A Theory of Action Perspective--CHRIS ARGYRIS 261
14. Where does Inequality Come from? The Personal and Intellectual Roots of Resource-Based Theory--JAY B. BARNEY 280
15. Organizational Effectiveness: Its Demise and Re-emergence through Positive Organizational Scholarship--KIM CAMERON 304
16. Managerial and Organizational Cognition: Islands of Coherence--ANNE S. HUFF 331
17. Developing Theory about the Development of Theory--HENRY MINTZBERG 355
18. Managing Organizational Knowledge: Theoretical and Methodological Foundations--IKUJIRO NONAKA 373
19. The Experience of Theorizing: Sensemaking as Topic and Resource--KARL E. WEICK 394
  PART III ENVIRONMENTAL CONTINGENCIES AND ORGANIZATIONS  
20. The Development of Stakeholder Theory: An Idiosyncratic Approach--R. EDWARD FREEMAN 417
21. Developing Resource Dependence Theory: How Theory is Affected by its Environment--JEFFREY PFEFFER 436
22. Institutional Theory: Contributing to a Theoretical Research Program--W. RICHARD SCOTT 460
23. Transaction Cost Economics: The Process of Theory Development--OLIVER E. WILLIAMSON 485
24. Developing Evolutionary Theory for Economics and Management--SIDNEY G. WINTER 509
25. An Evolutionary Approach to Institutions and Social Construction: Process and Structure--LYNNE G. ZUCKER and MICHAEL R. DARBY 547
26. Epilogue: Learning to Develop Theory from the Masters--KEN G. SMITH and MICHAEL A. HITT 572

 

LIST OF CONTRIBUTORS
Chris Argyris Harvard University, USA
Albert Bandura Stanford University, USA
Jay B. Barney Ohio State University, USA
Lee R. Beach University of Arizona, USA
Kim Cameron University of Michigan, USA
Michael R. Darby University of California, Los Angeles, USA
Robert Folger University of Central Florida, USA
R. Edward Freeman University of Virginia, USA
Michael Frese Giessen University, Germany
J. Richard Hackman Harvard University, USA
Donald C. Hambrick Pennsylvania State University, USA
Michael A. Hitt Texas A&M University, USA
Anne S. Huff Technische Universität München, Germany
Gary P. Latham University of Toronto, Canada
Edwin A. Locke University of Maryland, USA
Henry Mintzberg McGill University, Canada
Terence R. Mitchell University of Washington, USA
Richard T. Mowday University of Oregon, USA
Ikujiro Nonaka Hitotsubashi University, Japan
Greg R. Oldham University of Illinois, Urbana Champaign, USA
Jeffrey Pfeffer Stanford University, USA
Lyman W. Porter University of California, Irvine, USA
Denise M. Rousseau Carnegie Mellon University, USA
W. Richard Scott Stanford University, USA
Ken G. Smith University of Maryland, USA
Barry M. Staw University of California, Berkeley, USA
Richard M. Steers University of Oregon, USA
Victor H. Vroom Yale University, USA
Karl E. Weick University of Michigan, USA
Oliver E. Williamson University of California, Berkeley, USA
Sidney G. Winter University of Pennsylvania, USA
Lynne G. Zucker University of California, Los Angeles, USA

Where did the giant Harvard economist John Kenneth Galbraith fail?


Leaders and the Responsibility of Power


"A Wisdom 101 Course!" February 15, 2010 ---
http://www.simoleonsense.com/a-wisdom-101-course/

"Overview of Prior Research on Wisdom," Simoleon Sense, February 15, 2010 ---
http://www.simoleonsense.com/overview-of-prior-research-on-wisdom/

Modern Science and Ancient Wisdom  --- http://faculty.trinity.edu/rjensen/theory01.htm#AncientWisdom

"An Overview Of The Psychology Of Wisdom," Simoleon Sense, February 15, 2010 ---
http://www.simoleonsense.com/an-overview-of-the-psychology-of-wisdom/

"Why Bayesian Rationality Is Empty, Perfect Rationality Doesn’t Exist, Ecological Rationality Is Too Simple, and Critical Rationality Does the Job,"
Simoleon Sense, February 15, 2010 --- Click Here
http://www.simoleonsense.com/why-bayesian-rationality-is-empty-perfect-rationality-doesn%e2%80%99t-exist-ecological-rationality-is-too-simple-and-critical-rationality-does-the-job/

 

Bob Jensen's threads on accounting theory --- http://faculty.trinity.edu/rjensen/theory.htm


 

The Evolution of Social Cognitive Theory
ALBERT BANDURA

 

PG.# 19 BANDURA
Fortuitous events got me into psychology and my marital partnership.  I initially planned to study the biological sciences.  I was in a car pool with pre-meds and engineers who enrolled in classes at an unmercifully early hour.  While waiting for my English class I flipped through a course catalogue that happened to be left on a table in the library.  I noticed an introductory psychology course that would serve as an early time filler.  I enrolled in the course and found my future profession.  It was during my graduate school years at the University of Iowa that I met my wife through a fortuitous encounter.  My friend and I were quite late getting to the golf course one Sunday.  We were bumped to an afternoon starting time.  There were two women ahead of us.  They were slowing down.  We were speeding up.  Before long we became a genial foursome.  I met my wife in a sand trap.  Our lives would have taken entirely different courses had I shown up at the early scheduled time.

Some years ago I delivered a presidential address at the Western Psychological Convention on the psychology of chance encounters and life paths (Bandura, 1982).  At the convention the following year, an editor of one of the publishing houses explained that he had entered the lecture hall as it was rapidly filling up and seized an empty chair near the entrance.  In the coming week, he will be marrying the woman who happened to be seated next to him.  With only a momentary change in time of entry, seating constellations would have altered and this intersect would not have occurred.  A marital partnership was, thus, fortuitously formed at a talk devoted to fortuitous determinants of life paths!

Fortuitous influences are ignored in the causal structure of the social sciences even though they play an important role in life courses.  Most fortuitous events leave people untouched, others have some lasting effects, and still others branch people into new trajectories of life.  A science of psychology does not have much to say about the occurrence of fortuitous intersects, except that personal proclivities, the nature of the settings in which one moves, and the types of people who populate those settings make some types of intersects more probable than others.  Fortuitous influences may be unforeseeable, but having occurred, they enter as contributing factors in causal chains in the same way as prearranged ones do.  Psychology can gain the knowledge for predicting the nature, scope, and strength of the impact these encounters will have on human lives.  I took the fortuitous character of life seriously, provided a preliminary conceptual scheme for predicting the psychosocial impact of such events, and specified ways in which people can capitalize agentically on fortuitous opportunities (Bandura, 1982, 1998).

PG.# 24 BANDURA
Scientific advances are promoted by two kinds of theories (Nagel, 1961).  One form seeks relations between directly observable events but shies away from the mechanisms subserving the observable events.  The second form focuses on the mechanisms that explain the functional relations between observable events.  The fight over cognitive determinants was not about the legitimacy of inner causes, but about the types of inner determinants that are favored (Bandura, 1996).  For example, operant analysts increasingly place the explanatory burden on determinants inside the organism, namely the implanted history of reinforcements.

PG.# 27 BANDURA
Not only are cultures not monolithic entities, but they are no longer insular.  Global connectivity is shrinking cross-cultural uniqueness.  Moreover, people worldwide are becoming increasingly enmeshed in a cyberworld that transcends time, distance, place, and national borders.  In addition, mass transnational influences are homogenizing some aspects of life, polarizing other aspects, and creating a lot of cultural hybridizations fusing elements from diverse cultures.

PG.# 29 BANDURA
Controlled field studies that systematically vary psychosocial factors under real-life conditions provide greater ecological validity, but they too are limited in scope.  Finite resources, limits imposed by social systems on what types of interventions they permit, hard to control fluctuations in quality of implementation, and ethical considerations place constraints on controlled field interventions.  Controlled experimentation must, therefore, be supplemented with investigation of naturally produced variations in psychosocial functioning linked to identifiable determinants (Nagel, 1961).  The latter approach is indispensable in the social sciences.

Verification of functional relations requires converging evidence from different research strategies.  Therefore, in the development of social cognitive theory, we have employed controlled laboratory studies, controlled field studies, longitudinal studies, behavior modification of human dysfunctions not producible on ethical grounds, and analyses of functional relations in naturally occurring phenomena.  These studies have included populations of diverse sociodemographic characteristics, multiple analytic methodologies, applied across diverse spheres of functioning in diverse cultural milieus.

PG.# 30 BANDURA
It is one thing to generate innovative ideas that hold promise for advancing knowledge, but another to get them published.  The publication process, therefore, warrants brief comment from the trenches.  Researchers have a lot of psychic scar tissue from inevitable skirmishes with journal reviewers.  This presents special problems when there is conceptual inbreeding in editorial boards.  The path to innovative accomplishments is strewn with publication hassles and rejections.

It is not uncommon for authors of scientific classics to experience repeated initial rejection of their work, often with hostile embellishments if it is too discordant with what is in vogue (Campanario, 1995).  The intellectual contributions later become the mainstays of the field of study.  For example, John Garcia, who eventually was honored for his fundamental psychological discoveries, was once told by a reviewer of his often-rejected manuscripts that one is no more likely to find the phenomenon he discovered than bird droppings in a cuckoo clock.

Gans and Shepherd (1994) asked leading economists, including Nobel Prize winners, to describe their experiences with the publication process.  Their request brought a cathartic outpouring of accounts of publication troubles, even with seminal contributions.  The publication hassles are an unavoidable but frustrating part of a research enterprise.  The next time you have one of your ideas, prized projects, or manuscripts rejected, do not despair too much.  Take comfort in the fact that those who have gone on to fame have had a rough time.  In his delightful book Rejection, John White (1982) vividly documents that the prominent characteristic of people who achieve success in challenging pursuits is an unshakable sense of efficacy and a firm belief in the worth of what they are doing.  This belief system provides the staying power in the face of failures, setbacks, and unmerciful rejections.

PG.# 31 BANDURA
There is much talk about the validity of theories, but surprisingly little attention is devoted to their social utility.  For example, if aeronautical scientists developed principles of aerodynamics in wind tunnel tests but were unable to build an aircraft that could fly, the value of their theorizing would be called into question.  Theories are predictive and operative tools.  In the final analysis, the evaluation of a scientific enterprise in the social sciences will rest heavily on its social utility.




Image Theory
LEE R. BEACH and TERENCE R. MITCHELL

 

PG.# 39 BEACH and MITCHELL
This state of affairs elicited two immediate responses from behavioral decision researchers.  The first was to declare decision makers flawed and to insist that they learn to behave as the normative models prescribed.  The impact of this response has been minimal; there is little or no evidence that training in decision theory or decision analytic methods makes one a better decision maker.  The second response was to modify normative theory, usually by retaining the general maximization of expected value framework but adding psychological assumptions that make the theory more predictive of actual decision behavior.  Kahneman and Tversky's Prospect Theory (Kahneman and Tversky, 1979; Tversky and Kahneman, 1992) is the prime example of this response.  By taking into account various biases, the underlying logic of the normative model remained relatively unscathed.

Quite aside from whether observed decision making resembles a gambler behaving as normative theory prescribes, there are two very large logical problems with the gamble analogy itself.  The first is that the expected value of a gamble is the amount that the gambler can expect to win, on average, if he or she plays the gamble repeatedly.  However, it is not at all clear what expected value means for a single gamble; the gambler either wins or loses and the average is irrelevant.  Thus, the gamble analogy may hold if a decision maker makes a series of highly similar decisions, but it probably does not hold for unique decisions.  In fact, in laboratory studies, gamblers treat repeated and unique gambles very differently (Keren and Wagenaar, 1987).  Because decision makers regard the bulk of their decisions as unique, it seems unlikely they would treat many of them like gambles, which makes the analogy inappropriate.  A manager does not approach a major decision with the idea that he or she will get to do this repeatedly and all that matters is that he or she is successful, on average, over the long run.
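To make the long-run versus single-play distinction concrete, here is a small Python sketch; the gamble and its numbers are hypothetical illustrations, not figures from the chapter:

import random

# A hypothetical gamble: win $100 with probability .2, else win nothing.
# Its expected value is 0.2 * 100 = $20, an amount no single play can return.
random.seed(1)

def play():
    return 100 if random.random() < 0.2 else 0

outcomes = [play() for _ in range(100_000)]
print(sum(outcomes) / len(outcomes))  # long-run average: close to 20
print(play())                         # a single play: only ever 0 or 100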

The second problem with the gamble analogy is that real gamblers do not get to influence the outcomes of gambles; they place their bet and await the turn of the card or the spin of the wheel.  In personal and organizational decision making, substantial time may elapse between the decision and its outcomes and most of us use this time to do our utmost to influence those outcomes.  We acknowledge that risk abounds, but we do not accept the passive role of a gambler who patiently waits to see if he won or lost. This is why probabilities make so little sense to most people --- they want to use probabilities to describe the overall riskiness of the decision task, but they do not want to attach probabilities to every attribute of each decision alternative. In fact, real world decision makers insist that they try to nullify risk (probability) by working hard to make sure that these things come out well.

PG.# 41 BEACH and MITCHELL Both of us had a history of working with Fred Fiedler.  Among the many contributions Fred has made to organizational theory, one of the most important is the concept of contingency theory.  A contingency theory assumes that behavior is contingent upon the characteristics of the person, the characteristics of the task, and the characteristics of the environment in which the person and the task are embedded.  The theoretical problem is to identify the components of each of these three classes of characteristics.  The empirical problem is to see how the components of these classes of characteristics influence the behavior of interest.

So, based on our introspections about our own decision strategies and on our familiarity with the relevant literature, we began to write a contingency theory of decision strategy selection.  We began with the idea that decision makers have repertories of strategies that range from aided analytic strategies, such as decision matrices and decision trees based on SEU, which usually require the help of a computer and/or a decision analyst; to unaided analytic strategies, such as Simon's (1957) "satisficing rule"; to simple nonanalytic strategies, such as a rule of thumb or asking a friend or consulting a fortune teller.  The expenditure of effort (and, sometimes, money) required to use these strategies decreases from aided analytic to nonanalytic.  Moreover, there are individual differences in the strategies decision makers have in their repertories.

The decision maker's characteristics are knowledge of strategies, ability to use them, and motivation.  The latter is characterized as wanting to expend the least effort compatible with the demands of the decision task, whose characteristics are unfamiliarity, ambiguity, complexity, and instability.  The decision maker and the task are embedded in a decision environment characterized by the irreversibility of the decision, significance, accountability for being correct, and time/money constraints.  The strategy selection mechanism is driven by the decision maker's motivation: Select a strategy by balancing the effort of using it against its potential for producing a desirable outcome.
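The balance Beach and Mitchell describe can be illustrated with a toy calculation in Python. The strategies, effort costs, and success probabilities below are invented stand-ins, not parameters from the Strategy Selection Model itself:

strategies = [
    # (name, effort cost, chance of a desirable outcome) -- hypothetical values
    ("rule of thumb",        1.0, 0.55),
    ("satisficing",          2.0, 0.70),
    ("aided analysis (SEU)", 6.0, 0.90),
]

def select_strategy(stakes, repertoire):
    """Balance the effort of each strategy against its potential payoff.

    stakes: the value of deciding well; significance, irreversibility,
    accountability, and time/money constraints would all feed into this.
    """
    return max(repertoire, key=lambda s: stakes * s[2] - s[1])

print(select_strategy(3, strategies)[0])   # low stakes -> cheap rule of thumb
print(select_strategy(40, strategies)[0])  # high stakes -> costly aided analysis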

PG.# 42, 43, and 44 BEACH and MITCHELL In light of our thinking about these three troubling issues, and in light of our doubts about the generality of the Strategy Selection Model, we actively tried to make ourselves think outside the accepted canon and lore about decision making.  With the help of Kenneth Rediker, who was a graduate student at the time, we held weekly think-sessions in which we chased ideas.  Slowly, we began to see a structure to what we were thinking, and we began to write small essays trying to pin down our ideas.  These essays eventuated in our first attempt to go public (Mitchell, Rediker, and Beach, 1986).

After that first publication things got tough.  American journal reviewers seemed particularly reluctant to publish our work, even the empirical studies.  We did much better in Europe (e.g., Beach and Mitchell, 1987; Beach, et al., 1988; Beach and Strom, 1989).  To get the word out, we decided to put our ideas, and our research, in a book, but no American publisher was interested.  Finally, Britain's Wiley Ltd. took the risk, publishing Image Theory: Decision Making in Personal and Organizational Contexts in 1990.  Although we do not believe many people read the book, its mere existence seemed to give the theory legitimacy and interest quickly grew.

3.2 IMAGE THEORY BRIEFLY In the Image Theory view, the decision maker is an individual acting alone.  Of course, most decisions are made in concert with others, be it a spouse, a friend, business colleagues, or whoever.  But, even so, the decision maker has to make up his or her own mind and then differences of opinion must be resolved in some manner that depends upon the dynamic of the group.  That is, Image Theory does not regard groups or organizations as capable of making decisions per se; they are the contexts within which individual members' decisions become consolidated through convincing others, negotiation and politics to form a group product (Beach, 1990; Beach and Mitchell, 1990; Davis, 1992).  As a result, Image Theory focuses on the individual making up his or her own mind in the context3 of a social relationship or an organization, with the presumption that the result may later prevail, be changed, or be overruled when presented to others.

Each decision maker is seen as possessing values that define for him or her how things should be and how people ought to behave.  This involves such old-fashioned concepts as honor, morals, ethics, and ideals as well as standards of equality, justice, loyalty, stewardship, truth, beauty, and goodness, together with moral, civic, and religious precepts and responsibilities.  Collectively these are called principles and they are "self-evident truths" about what the decision maker (or the group to which he or she belongs) stands for.  They help determine the goals that are worthy of pursuit, and what are and what are not acceptable ways of pursuing those goals.  Often these principles cannot be readily articulated, but they are powerful influences on decisions.



3    The social or organizational context includes knowledge about others' views, information about the issue requiring a decision and the values and meanings (culture) shared by members of the relationship or organization (Beach, 1993).



PG.# 46 and 47 BEACH and MITCHELL To state all of this a bit more formally: Decision makers use their store of knowledge to set standards that guide decisions about goals to pursue and strategies for pursuing them.  Potential goals and plans that are incompatible with (violate) the standards are quickly screened out and the best of the survivors is chosen.  Subsequent implementation of the choice is monitored for progress toward goal achievement; lack of acceptable progress results in replacement or revision of the plan or adoption of a new goal.

Each decision maker possesses a store of knowledge that is far greater than needed for the decision at hand.  That store can be conveniently partitioned into three categories, which are called images (Boulding, 1956; Miller, Galanter, and Pribram, 1960) because they are the decision maker's vision of what constitutes a desirable and proper state of events.  The categories are labeled the value image (principles), the trajectory image (the agenda of goals), and the strategic image (the plans that are being implemented to achieve the goals).

The constituents of the three images can be further partitioned into those that are relevant to the decision at hand and those that are not.  The relevant constituents define the decision's frame, which gives meaning to the context and provides the standards that constrain the decision.

There are two kinds of decisions, adoption decisions and progress decisions.  Adoption decisions are about adding new principles, goals, or plans to the respective images.  Progress decisions are about whether plan implementation is producing progress toward goal achievement.

There are two decision mechanisms, the compatibility test and the profitability test.  The compatibility test screens candidate principles, goals, or plans on the basis of their quality.  Actually, the focus is on lack of quality in that the candidate's compatibility decreases as a function of the weighted sum of the number of its violations of the relevant standards from the various images, where the weight reflects the importance of each violated standard.  If a single candidate survives screening by the compatibility test, it is adopted as a constituent of its respective image.  If there are multiple candidates and only one survives, it is adopted.  If there are multiple candidates and more than one survives, the tie is broken by application of the profitability test.  The profitability test focuses on quantity--choose the best candidate.  The Christensen-Szalanski formalization of the Strategy Selection Model has been incorporated into Image Theory to account for the many ways in which the candidates in the choice set can be evaluated and the best of them chosen.
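A rough computational paraphrase of the two tests may help. Everything below--the standards, weights, rejection threshold, and payoffs--is a hypothetical illustration; Image Theory specifies the mechanism, not the numbers:

REJECTION_THRESHOLD = -2.0  # hypothetical cutoff; the theory leaves this open

def compatibility(violations, weights):
    """Weighted sum of standard violations, returned as a negative score.

    violations: one boolean per relevant standard (True = violated)
    weights:    the importance of each standard
    """
    return -sum(w for v, w in zip(violations, weights) if v)

def decide(candidates):
    """Screen by the compatibility test, then choose by the profitability test."""
    survivors = [c for c in candidates
                 if compatibility(c["violations"], c["weights"]) >= REJECTION_THRESHOLD]
    if not survivors:
        return None                 # no candidate is adoptable
    if len(survivors) == 1:
        return survivors[0]         # a lone survivor is adopted outright
    return max(survivors, key=lambda c: c["payoff"])  # profitability breaks ties

# Three hypothetical candidate plans screened against two standards
plans = [
    {"name": "A", "violations": [True, True],   "weights": [1.5, 1.0], "payoff": 9},
    {"name": "B", "violations": [False, True],  "weights": [1.5, 1.0], "payoff": 7},
    {"name": "C", "violations": [False, False], "weights": [1.5, 1.0], "payoff": 5},
]
print(decide(plans)["name"])  # A fails the screen; B beats C on profitability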

PG.# 51 BEACH and MITCHELL In short, we tailored our research emphases to colleagues in two different, but related, disciplines.  Decision researchers like equations and numbers, human resource researchers like interesting concepts.  By couching our work in the appropriate terms we were able to arouse the interest of people in both disciplines.  Of course, it would be nice if the world would beat a path to your door after you invent a better theory, but it really does not happen that way.  There is a marketplace of ideas and marketing is as much a part of that marketplace as any other.  Our research strategy was designed to address this marketing problem, and it has worked well enough, in that other people have taken up the cause and extended the theory in ways we could never have imagined.  Moreover, this acceptance means we can now move on to examine a broader array of features of the theory.




The Road to Fairness and Beyond--ROBERT FOLGER

PG.# 56 FOLGER The turning point in events that led to my dissertation's themes, however, came indirectly.  At some point while perusing the social-comparison literature, I read the Adams (1965) chapter on inequity.  Here was something I could sink my teeth into!  Unlike ideas that seemed to go several directions at once, the Adams material had a focus that seemed promising.  Also, I saw significant "holes" in the research.  For one thing, Adams's own research stream had concentrated almost exclusively on the counterintuitive aspects of advantageous inequity ("overpay"), whereas I found the relative deprivation of disadvantageous inequity more interesting.  I also thought the lack of systematic investigations into the latter left a large number of questions unanswered.  Moreover, the Adams framework seemed well formulated in ways that would make useful operationalizations of the relevant constructs reasonably straightforward.  The more I read, the more convinced I became that predictions about reactions to underpayment were problematic because of these unanswered questions.  A series of early studies by Karl Weick (e.g., 1966) only confirmed this impression.

PG.# 67, 68, & 69 FOLGER At the time, the mainstream journals reacted negatively to the presentation of results from those surveys in terms of procedural justice because the items referred not to choice or voice but to the demeanor and conduct of the police.  Having been influenced by Leventhal's (1980) approach to procedural variables, however, Tom conceived of procedures more inclusively.  Hindsight indicates we had addressed what Bob Bies later termed interactional justice (e.g., treating people with dignity and respect), but his writings on that topic had not yet appeared in print.

Bob became the next source for my recognizing the incompleteness of outcome-dominated thinking because of the frequency with which people care as much or more "how" things transpire as they do "what" they receive as tangible benefits.  The evolution of my thinking did not move in a linear fashion; various sideways investigations also occurred (e.g., Folger and Konovsky, 1989; Folger, Konovsky, and Cropanzano, 1992).  I only realized gradually that traditionally conceived "outcomes" (e.g., pay amounts) often fail to have the psychic and symbolic impact of interpersonal misconduct that demeans (e.g., publicly insulting subordinates in front of their peers).

Work by Bies influenced me in several ways.  His notion of interactional justice had a lasting impact not only on me but also on organizational science.  He also stressed social accounts, however, in ways that linger at least as much in my case.  Here, I saw that my RCT manipulations of "procedural" factors (e.g., Folger, Rosenfield, and Robinson, 1982; Folger and Martin, 1986) did not actually manipulate the structural aspects of procedures but instead applied social accounts to influence the participants' perceptions of procedures.  Bob's having made that explicit led to a follow-up study (Cropanzano and Folger, 1989) showing that the effects of both accounts and structural elements nonetheless paralleled one another.  Bies also reinforced my thinking that notions regarding legitimacy stretched beyond the structural design features of formal procedures per se--the very intuition that had guided me in using justification as the key non-outcome element in RCT rather than procedures or procedural justice.  In addition, I saw this beyond-structure impact as coming from social conduct, such as choices of how, when, and what to communicate (the accounts emphasis) but also including a range of interpersonal behaviors whether explicitly linked with communication efforts or not (such as giving someone the "cold shoulder," deliberately ignoring someone or taking pains to have nothing to do with them; e.g., Folger, 1993).

Having given an historical background on RCT, I turn now to Fairness Theory as an outgrowth from that line of thought.

4.3 FAIRNESS THEORY Fairness Theory or FT (e.g., Folger and Cropanzano, 1998, 2001; Folger, Cropanzano, and Goldman, forthcoming) herein reflects as-yet-unpublished developments in that model.  It stresses the theme of accountability impressions (not necessarily from conscious, deliberative thought--at least for some instances of initial reactions to events and persons) in relation to counterfactuals.  Accountability regarding blameworthiness can, in principle, reflect a continuum but in practice tends towards such poles as innocence versus guilt, blame versus credit, merit versus demerit.  FT posits that the motives and intentions presumed to underlie a person's mode of conduct can influence impressions about unfairness when the person seems at fault for wrongdoing.

The relevant counterfactuals--Would, Could, and Should--align roughly with elements from Schlenker's (e.g., 1997) triangle model of moral accountability as three interlocked components.  FT treats unfairness (holding someone accountable and blameworthy) as derived from a conjunction among these three facets relevant to impressions about human conduct.  Blame for unfairness amounts to a negative impression concerning each facet: What actually happened appears detrimental vis-à-vis three counterfactual representations (what did not happen) that each, in some sense, seem positive by comparison.

Pain contrasts negatively with pleasure as its (implicit) counterfactual, for example, just as guilt contrasts negatively with innocence.  Perceived unfairness metaphorically mirrors the "pain" associated with a perceiver's impressions about an incident (e.g., one person scathingly belittles another) that Would NOT have generated concern "if only" the incident had never taken place.  Blame also constitutes a negative (e.g., disapproving) impression related to at-least implicitly activated counterfactual representations concerning how the blamed person did not behave but Could and Should have behaved.

An example of an employee treating a customer in a rudely unfair manner (adapted from McColl-Kennedy and Sparks, 2003) illustrates these abstractions.  The rudely treated customer perceives unfairness with regard to the following conjunction of counterfactual standards or referents: "what could have occurred (being served with a smile), what should have occurred (being treated politely), and how it would have felt had an alternative action been taken (feeling happier)" (McColl-Kennedy and Sparks, 2003, 254).  Similarly, a third-party observer might consider the rudeness unfair and blame the employee for it if that perceiver's impressions include the sense that (a) the employee Could have smiled (e.g., did not have his or her mouth wired shut), (b) the employee Should have had more respect for the customer (e.g., by virtue of service-employees' duly assigned responsibilities and obligations toward customers in general), and (c) the situation Would not have aroused any concern on the observer's part in the absence of the kind of incident that occurred.

PG.# 81 FOLGER Adams, J. S. (1965). Inequity in social exchange.  In L. Berkowitz (ed.), Advances in Experimental Social Psychology, 2: 267-299.  New York: Academic Press.




Grand Theories and Mid-Range Theories: Cultural Effects on Theorizing and
the Attempt to Understand Active Approaches to Work
MICHAEL FRESE

 

PG.# 84 & 85 FRESE As is true of all human behavior, theory building is based on environmental forces and on person factors.  It has been my curse and my blessing to be overactive.  My overactive nature led me to believe that it was good to be active and to be in control of things.  Therefore, I quickly embraced theories that seemed to correspond with this prejudice.  The three theories that seemed to encompass what I stood for were Rotter's cognitive behaviorist theory (Rotter, Chance, and Phares, 1972), Seligman's learned helplessness theory (Seligman, 1975), and Hacker's action (regulation) theory (Hacker, 1973; Volpert, 1974).  Both Rotter's and Hacker's theories were indirectly related to a common source: Lewin's influence in Germany and in the U.S.  All my research centered around the themes of an active approach to work-life (the opposite of helplessness): Thus, I became interested in personal initiative as one such instance of an active approach.  Since an active approach means to explore, I also became interested in errors and how one can learn from errors.

PG.# 87 & 88 FRESE Our phenomenological orientation towards errors allowed us to make a discovery: When people are permitted and encouraged to make errors during training and are instructed to learn from errors, they perform better after training than when they are hindered from making errors.  This was surprising because most software trainers and a lot of theorists (e.g., Skinner and Bandura) had suggested otherwise: They favored the avoidance of errors because they considered errors too frustrating and inefficient for the learner, and thought that people would simply learn the wrong things.  Our so-called error training (later called error management training) proved to be superior to other methods of training people in computer skills (Heimbeck, et al., 2003; Ivancic and Hesketh, 2000; Keith and Frese, forthcoming).

Action theory argues that negative feedback is useful (Miller, Galanter, and Pribram, 1960): Action implies a goal (some set point that needs to be achieved).  Until one has achieved the goal, a person receives information that there is a discrepancy between the present situation and the set point (achievement of the goal, e.g., a person wants to travel to Rome and acknowledges that he or she is 500 miles away).  Thus, negative feedback presents information on what we have not yet achieved and it provides guidance to action.  Errors provide negative feedback but with a specific twist: An error implies that the actor should have known better.  It is the latter that produces the problems of blaming people--both oneself and others.
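The discrepancy-reducing loop described here (in the spirit of Miller, Galanter, and Pribram's feedback units) can be sketched in a few lines of Python; the 500-mile figure is the chapter's own example, while the step size is an arbitrary assumption:

goal = 0          # set point: 0 miles from Rome
position = 500    # current state: 500 miles away
step = 120        # hypothetical progress per action

while position > goal:                    # negative feedback: a discrepancy remains
    position = max(goal, position - step) # act to reduce the discrepancy
    print(f"{position} miles to go")      # the feedback that guides the next action
# The loop exits only when the discrepancy is gone, i.e., the goal is achieved.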

Therefore, we developed a training procedure (error management training) which gave participants explicit instructions to use errors as a learning device and not to blame themselves.  Participants are supposed to explore a system with minimal information provided; in contrast to exploratory training, error management training tasks are difficult right from the start, thereby exposing participants to many errors.  Error management training explicitly informs the participants of the positive function of errors; these so-called error management instructions are brief statements (we often called them heuristics because they allow us to deal with the error problem) designed to reduce potential frustrations in the face of errors: "Errors are a natural part of the learning processes!"  "I have made an error, great!  Because now I can learn!"  While participants work on the training tasks, the trainer provides no further assistance but reminds the participants of the error management instructions.  When comparing error management training with a training procedure that does not allow errors, error management training proved to be more effective across diverse groups of participants (university students as well as employees), training contents (e.g., computer training, driving simulator training), and training durations (1-hour training to 3-day training sessions), with medium to large effect sizes (Frese 1995).

PG.# 103 FRESE What have I learned from my journey as a scientist who contributed both to a grand theory as well as to middle range theories?  The most important issue seems to me to have an open mind to the quirks and difficulties, as well as to beautiful coping strategies that people show in their environment--I think that curiosity and being able to wonder and be surprised are the hallmark of good science.  I am very interested in concrete phenomena and I suggest one should become intensively involved in real life phenomena (these may also be laboratory phenomena but I, personally, have been more interested in those that constitute important issues in society--not necessarily in my own society).  It helps to cultivate contacts across cultures and maintain contacts with various strata in society--varied experiences support the process of being surprised, stumbling across interesting phenomena, and of developing a wider net of theoretical ideas and methodological approaches.

Good research questions often start with wonderment and surprise.  We then have to work on understanding experiences and phenomena theoretically.  For this it is helpful to look at the world like a theory machine that attempts to understand all sorts of phenomena with theoretical concepts.  I remember that as students and young researchers my friends and I used to apply theories like a 2-year-old takes a hammer: We continuously attempted to use it to explain every phenomenon possible--in this way we quickly stumbled across the limits of the usefulness of these theories and, at the same time, we started to understand the theories better.

PG.# 104 & 105 FRESE In terms of methodology, I have come to rely more and more on a combination of qualitative and quantitative approaches.  I use structured interviews because differential anchor points are particularly problematic in any questionnaire research: What is high planning for one owner may be complete chaos for another one.  Structured interviews are useful, not only because they showed excellent validity in meta-analytic research (Hunter and Schmidt, 1996), but also because interviews gave me a chance to probe owners' answers and to understand precisely what they mean.  Questionnaires sometimes "lead" participants to certain answers.  For example, it would have been "leading" to ask directly for planning and activity within the questionnaire survey.  This is particularly true for cultural contexts in which it is improper to contradict others and where there is a tendency to create harmony (as in Africa).  All of this speaks for interviews.  At the same time, I want to have quantitative data to test hypotheses and to confirm and falsify them--therefore it is necessary to use coding procedures (I use very robust ones--not complicated content analyses).

I should warn you, however.  Not all of this writing immediately gets translated into academic success.  As a matter of fact, it is my observation that some of the empirical articles that I am most proud of (probably because they are dearest to my theoretical approach), have been the most difficult to publish.  My hunch is that they break with the typical approach to doing things and, therefore, invite criticism that reviewers are only too glad to provide.  On the other hand, those articles that I am most proud of are also often the ones that have the highest impact.  And that is after all what we are interested in.  We should never want to publish something just because we need another publication (well...at least never after we get tenure...).  I usually was driven to work hard on publications by the fact that I wanted to communicate something that I found to be important.  We should all want to shape and influence the development of science and knowledge rather than just be a smoothly functioning particle of a scientific machine.

 




Upper Echelons Theory: Origins, Twists and Turns, and Lessons Learned
DONALD C. HAMBRICK

PG.# 109 HAMBRICK The central idea of upper echelons theory is that executives act on the basis of their highly personalized interpretations of the situations and options they face.  That is, executives inject a great deal of themselves--their experiences, personalities, and values--into their behaviors.  To the extent those behaviors are of consequence, say in shaping strategy or influencing the actions of others, organizations then become reflections of their top managers.

PG.# 120 & 121 HAMBRICK While doing field research in the early 1990s, interviewing CEOs about their top management teams (TMTs), an unsettling fact became clear: Many, many top management teams have few "team" properties.  They consist primarily of solo operators who are largely allowed to run their own shows, who interact minimally, sometimes rarely seeing each other.  Such a condition poses a problem for upper echelons theory, or at least for that aspect that deals with how TMT characteristics affect firm outcomes.  For, if TMTs are highly fragmented, then team characteristics will matter very little to firm outcomes.  Instead, firm outcomes are the outgrowth of a host of narrow, specialized choices made by various individual executives (Hambrick, 1994).

These observations led me to develop and elaborate on the concept of "behavioral integration" within TMTs.  Behavioral integration is the degree to which mutual and collective interaction exists within a group, and it has three main elements or manifestations: information exchange, collaborative behavior, and joint decision making.  That is, a behaviorally integrated TMT shares information, shares resources, and shares decisions.  In its focus on substantive interaction, behavioral integration is related to, but distinct from, "social integration," a concept that places more emphasis on members' sense of group pride or team spirit (Shaw, 1981).

In my initial presentation of behavioral integration, I proposed an array of factors that will determine the degree of behavioral integration that will exist in a given TMT.  These factors included environmental factors, organizational factors, and the CEO's own personality or performance.  Recently, Simsek, et al. (forthcoming) collected data on TMTs in 402 small- and mid-sized companies, verifying some of the key predictors of TMT behavioral integration.  In particular, they found that behavioral integration was positively related to the CEO's own collectivist orientation and tenure, and negatively related to TMT size and several types of TMT diversity.

PG.# 122 & 123 HAMBRICK Even though upper echelons theory has made its mark on the organizational sciences, I have some lingering disappointments about our shortcomings in testing and verifying the theory.  Foremost, I am disappointed that we have not done a better job of directly examining the psychological and social processes that stand between executive characteristics on the one hand and executive behavior on the other.  Namely, we have done a poor job of getting inside "the black box" (Lawrence, 1997; Markoczy, 1997).  For example, when we observe that long-tenured executives engage in strategic persistence, why is that?  Are they committed to the status quo?  Risk-averse?  Tired?  Or what?  Even examination of executive psychological properties is not exempt from such questions.  So, for example, when we find that executives who have a high tolerance for ambiguity perform well when they pursue growth-oriented strategies (as opposed to harvest-oriented strategies) (Gupta and Govindarajan, 1984), why is that?  What's going on?  How does tolerance for ambiguity affect executive behaviors?  Even though we have talked for a long time about the need to get inside the black box (to the point that it has become a cliché to express the need), we still have made exceedingly little progress in doing so.

In this same vein, we have little evidence that executives filter the information they confront in any way that resembles the three-stage process depicted here as Figure 6.1.  For example, do executives with technology backgrounds scan more technology-oriented information sources than those who don't have technology backgrounds?  Do they notice, or perceive, more of the technology information they scan?  Do they require fewer pieces of information to form an opinion about a technology trend?  In short, there is a pressing need to gather data on the actual information-processing behaviors of individuals (and teams) in strategic decision-making situations.  Pursuing this perspective will certainly require laboratory-type or experimental research designs, as well as the tools and concepts of the psychologist.

A related disappointment is that we have done an inadequate job of disentangling causality in upper echelons studies.  Do executives make strategic choices that follow from their own experiences, personalities, and biases, as posited by the theory?  Or do certain organizational characteristics lead to certain kinds of executive profiles?  Over time, a reinforcing spiral probably occurs: managers select strategies that follow from their beliefs and preferences; successors are then selected according to how well their qualities suit that strategy; and so on.  Thus far, relatively few upper echelons studies have been designed in such ways as to allow convincing conclusions about causal direction.




Goal Setting Theory: Theory Building by Induction
EDWIN A. LOCKE and GARY P. LATHAM

 

PG.# 128 LOCKE and LATHAM Life is a process of goal-directed action.  This applies both to the vegetative level (e.g., one's internal organs) and to the level of purposeful choice (Locke and Latham, 1990).  The conscious mind is the active part of one's psychology; one has the power to volitionally focus one's mind at the conceptual level (Binswanger, 1991; Peikoff, 1991).  Volition gives one the power to consciously regulate one's thinking and thereby one's actions.  Goal setting theory (Locke and Latham, 1990, 2002) rests on the premise that goal-directedness is an essential attribute of human action and that conscious self-regulation of action, though volitional, is the norm.

We do not deny the existence of the subconscious nor its power to affect action.  In fact, the subconscious is essential to survival in that only about seven separate elements can be held in focal awareness at the same time.  The subconscious operates automatically and serves to store knowledge and skills which are needed in everyday action.  The subconscious is routinely activated by our conscious purposes and also determines our emotional responses (Locke, 1976).

PG.# 137 & 138 LOCKE and LATHAM We recognized early on, again by introspection, that goal commitment is critical to goal effectiveness.  We, like everyone else, knew that most New Year's resolutions are abandoned.  Lofty sounding intentions do not necessarily indicate commitment to specific goals.

The question was: How do you get goal commitment?  Our initial belief was: through participation.  Participation in decision making (pdm) was a popular topic of study following World War II.  Locke (1968) predicted that participation would enhance goal commitment.  We did not pursue this matter for some time; then, starting in the 1970s, there was chaos in the literature on this topic.  The reason was largely political (Wagner and Gooding, 1987).  For many scholars participation was viewed not only as a potentially useful managerial technique, but as a "moral imperative."  Because it was considered a "democratic" practice and an antidote to fascism, the results simply had to be supportive of the former ideology.

Locke and Schweiger (1979) conducted a literature review.  They discovered that the interpretation of many pdm studies had been distorted to make the results appear supportive.  When the data were interpreted objectively, pdm only had a minimal effect on performance.  Strongly worded arguments on this issue went back and forth in the literature for years; heated debates took place at professional meetings.

Latham and I, however, stuck to our core principle: "reality (facts) first."  We had no "moral" bias either for or against pdm.  As noted, we both initially expected pdm to lead to higher goal commitment, because the positive effects of pdm had been touted so much in the earlier literature.

The thrill of inductive, programmatic research is akin to that of being a detective.  Latham's doctoral dissertation involving logging crews revealed that productivity was highest in those who were randomly assigned to the participatively set goal condition and less educated (Latham and Yukl, 1975).  This supported the value of pdm--but there was a confound.  It turned out that goal difficulty was also significantly higher in that condition.  The same result was obtained in a field experiment (Latham, Mitchell and Dossett, 1978).  Then a series of laboratory experiments showed that when goal difficulty was held constant, participation in goal setting had no effect on goal commitment or performance (Latham and Marshall, 1982; Latham and Saari, 1979a; Latham and Steele, 1983).  All this seemed to indicate that the initial pdm effects had simply been goal effects.  The issue of pdm was momentarily settled.

Soon, however, a series of studies by Miriam Erez and her colleagues appeared (e.g., Earley and Kanfer, 1985; Erez, Earley and Hulin, 1985).  The results of this work can be summarized in a single sentence: Latham is wrong; participatively set goals work better than assigned goals.  Instead of attacking Erez, Latham posed the question: why the differences?

When competent researchers obtain contradictory findings, the explanation may lie in differences in methodology.  We decided to resolve the conflict in a revolutionary way.  Latham and Erez would design experiments with Locke, who was a close and respected friend of both parties, agreeing to serve as a helper and a mediator between us.  The result was a series of experiments jointly designed by the three of us.

PG.# 139 LOCKE and LATHAM Atkinson (1958), a student of McClelland, predicted that one's motivation is highest when task (goals were not part of his model) difficulty is .50.  This suggested a possible curvilinear (inverted-U) relationship between goal difficulty and performance.
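The .50 peak follows from Atkinson's risk-taking formulation, in which the tendency to approach a task is the product of motive strength, the probability of success P, and an incentive value of 1 - P. A few lines of Python show the inverted U (motive strength is set to 1 here purely for illustration):

# Atkinson: T = M * P * (1 - P), where P is the probability of success
# and the incentive value of success is 1 - P. With M = 1:
for p in [0.1, 0.3, 0.5, 0.7, 0.9]:
    print(f"P = {p:.1f}  ->  T = {p * (1 - p):.2f}")
# T = .09, .21, .25, .21, .09 -- the inverted U, maximal at P = .50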

In contrast, expectancy theory (Vroom, 1964) states that the force to act is a multiplicative function of valence, instrumentality, and effort-performance expectancy.  Holding the first two factors constant, the theory predicts a positive, linear association between expectancy and performance.  However, difficult goals are harder to attain than easy goals, thus we had found a negative linear relationship between expectancy of success (high expectancy meant easy goals) and performance (Locke, 1968).

All three theories could not be correct.  Aided by an insight by Howard Garland, Locke, Motowidlo, and Bobko (1986) resolved the puzzle.  When goal level is held constant, that is, within any given goal group, the positive linear relationship asserted by expectancy theory is correct.  Between groups, when goal level is varied, the relationship is negative.  This does not contradict expectancy theory, because expectancy theory assumes that the referent is fixed.  When Bandura's self-efficacy measure is used (which averages a person's confidence estimates across multiple performance outcome levels) both the within and between group associations are positive.  The curvilinear relationship between expectancy, or goal difficulty, and performance as suggested by Atkinson replicates only when there are a substantial number of people in the hard goal condition who reject their goals (Erez and Zidon, 1984; Locke and Latham, 1990).
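A toy simulation can show how both findings coexist. All numbers below are fabricated for illustration: within each goal group expectancy and performance move together, while the harder-goal group has lower expectancy but higher performance, so pooling the groups reverses the sign:

import random
from statistics import correlation  # Python 3.10+

random.seed(0)

# (mean expectancy, mean performance) for two hypothetical goal groups
groups = {"easy goal": (0.9, 50), "hard goal": (0.3, 80)}

records = []
for name, (e_mean, p_mean) in groups.items():
    for _ in range(50):
        e = e_mean + random.gauss(0, 0.05)                   # individual expectancy
        p = p_mean + 30 * (e - e_mean) + random.gauss(0, 2)  # within-group positive slope
        records.append((name, e, p))

easy = [(e, p) for g, e, p in records if g == "easy goal"]
print(correlation([e for e, _ in easy], [p for _, p in easy]))            # positive within a group
print(correlation([e for _, e, _ in records], [p for *_, p in records]))  # negative when groups are pooled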

PG.# 143 & 144 LOCKE and LATHAM Our approach to theory building is inductive.  Induction means going from the particular to the general.  This is in contrast to the "hypothetico-deductive" method.  The latter view stems from a long line of philosophical skeptics, from Hume to Kant to Popper to Kuhn.  The core premise of this view is that knowledge of reality is impossible.  Popper believed that because theories are not based on observations of reality, they can start, arbitrarily, from anywhere.  Thus, theories cannot be proven; they can only be falsified by testing deductions from them.  Even falsification, Popper asserted, never gets at truth.  Induction is rejected.  If Popper were correct, scientific discovery would be impossible.  But history refutes this view.

The history of science is the history of discoveries made by observations of reality, and integrated into laws and principles.  Subsequent discoveries do not necessarily invalidate previous ones, unless errors of observation or context-dropping were made.  They simply add to knowledge.  Mankind did not get from the swamps to the stars by eschewing the search for knowledge and seeking only to disprove arbitrary hypotheses.

Galileo, for example, did numerous experiments with freely falling objects, objects rolling down inclined planes, swinging pendulums, and trajectories of objects and induced the law of inertia, the constancy of gravity, and the laws governing horizontal and vertical motion.  He also invented an improved telescope and discovered four moons of Jupiter.  He proved that Venus orbits the sun--giving further credence to Copernicus's heliocentric theory.  Newton discovered that white light is composed of different colors by doing experiments with prisms.  He drew upon the observations of Kepler and Galileo to discover the laws of motion.  Especially revolutionary was the idea that all bodies are attracted to one another by a force (gravity) whose magnitude is proportional to the masses of the bodies, and inversely proportional to the square of the distance separating them.  With this knowledge, including his invention of calculus, he was able to explain the actions not only of the planets but of the tides.  Both Galileo and Newton used observation to gather data, conducted experiments, and then integrated their observations into a theory.

Einstein agreed: "Turning to the subject of the theory of [special] relativity, I want to emphasize that this theory has no speculative origin, it rather owes its discovery only to the desire to adapt theoretical physics to observable facts as closely as possible" (Einstein, 2002: 238).

Contrast Galileo, Newton, and Einstein to Descartes who argued that one can deduce the components of matter, the nature of the planets, moons, and comets, the cause of movement, the formation of the solar system, the nature of light and of sunspots, the formation of the stars, the explanation of tides and earthquakes, the formation of mountains, magnetism, the nature of static electricity and chemical interactions--all from what he claimed were innate ideas discovered intuitively.  Not surprisingly, every single one of his theories was wrong.2

Of course, theory building does include deduction.  But, the major premises that form the beginning of any syllogism (e.g., "all men are mortal") have to be established by induction, or else the conclusion, even if valid in "form," will be false.

What then does induction involve?


2    The comments about Galileo, Newton, and Descartes were based on portions of a forthcoming book by David Harriman.  These portions were published in The Intellectual Activist, vol. 14, nos. 3-5 (2000) and vol. 16, no. 11 (2002).  The authors are indebted also to Stephen Speicher for providing the information on Einstein.




Do Employee Attitudes towards Organizations Matter?
The Study of Employee Commitment to Organizations
LYMAN W. PORTER, RICHARD M. STEERS and RICHARD T. MOWDAY

PG. #171 & 172 PORTER, STEERS and MOWDAY The late 1960s and early 1970s in the United States were both turbulent and tranquil.  Campus unrest over the war in Vietnam and civil rights wreaked havoc across many college campuses as students, and sometimes faculty, picketed, struck, and otherwise protested situations that they thought were both unjust and unfair.  Occasionally, the demonstrations turned violent, and at its peak collective action was sufficiently strong and vocal to bring down a sitting U.S. president.  At the same time, however, most major U.S. companies remained bastions of relative tranquility as blue- and white-collar employees went to work every day and worked hard for a better life.  The "organization man" described in William H. Whyte's classic 1956 book of the same name was alive and well.  Managers (almost exclusively male) wore business suits and downsizing was not yet on the corporate radar screen.  People typically worked for one company throughout their career and retired at age 65.  While college campuses may have been in crisis, everything was "normal" in Corporate America.

These two contrasting pictures, one of strife and turmoil and one of stability and tranquility, puzzled many social observers of the time.  Exactly what was going on here?  Why were some employees, be they university professors or corporate managers, highly committed to their organizations, while others were indifferent or even antagonistic?  What caused some employees to form emotional bonds and strategic attachments to their organizations, while others quit as soon as they had a chance?  And throughout it all, how could organizations entice their best employees to remain with them for the duration?  These issues intrigued social scientists of the time because they forced organizations--and to some extent societies--to grapple with fundamental questions about the legitimate role of employees in work organizations.  Scholars began asking questions about the nature of employee commitment to organizations, as well as how commitment developed or failed to develop over time.  How did employers and employees define their mutual dependencies and how did they negotiate and implement psychological contracts?  From a research standpoint, the search was on for what became known as the causes and consequences of organizational commitment.

PG.# 185 & 186 PORTER, STEERS and MOWDAY The weight of the evidence suggests that employee attitudes toward the organization are behaviorally relevant.  However, the magnitude of these relationships reported in the literature suggests that organizational commitment, while important, is obviously not the only attitude that influences behaviors in the workplace.  Rather, the determinants of employee behavior in the workplace are complex and involve attitudes toward multiple features of work (e.g., the job and the organization), behavioral intentions, and contextual factors that facilitate or inhibit employees from acting on their intentions.  Given that this line of research on organizational commitment was motivated to redress the imbalance in research on job satisfaction and other job-focused attitudes that existed at the time, it seems reasonable to conclude that subsequent research has demonstrated that a broader array of attitudes is important to understanding behavior at work.

Even so, the world of work has changed dramatically since our initial research on organizational commitment in the 1970s and 1980s.  Downsizing and minimum-wage jobs have become strategies of choice for many firms in order to meet intense competitive pressures, while employees who retain their jobs are under increasing pressure for greater productivity and efficiency.  Working hours, including both voluntary and involuntary overtime, as well as stress levels, are on the increase.  Increased globalization pressures have led to a marked expansion of overseas manufacturing and outsourcing, even among white-collar and professional employees.  Meanwhile, younger employees of both genders are becoming increasingly vocal about securing a suitable work-family balance just at the time when such a balance may be most difficult to achieve.  Above all, gone are the days when most young high school and college graduates sought a career and a company for the long term.

In this regard, Peter Cappelli (1999: ix) has noted that "[T]he older, internalized employment practices, with their long-term commitments and assumptions, buffered the employment relationship from market pressures, but they are giving way to a negotiated relationship where the power shifts back and forth from employer to employee based on conditions in the labor market."  Even so, Cappelli acknowledges that most contemporary firms still require some form of employee commitment to meet their goals.  To accomplish this, he observes that many companies have tried to refocus employee commitment away from the company as a whole and towards specific aspects of the company, such as work teams.  "For many jobs, commitment to the corporation as a whole is largely irrelevant as long as the employees feel commitment to their team or project" (p. 11).  At the same time, he points out that in recent years "voluntary turnover has been less of a problem for the corporate world because virtually all corporations have been downsizing at the same time, creating a big surplus of talent on the market and also restricting those who quit voluntarily" (p. 6).




Developing Psychological Contract Theory
DENISE M. ROUSSEAU

 

PG.# 191 ROUSSEAU Valéry (1938, 1958) said, "There is no theory that is not a fragment, carefully prepared, of some autobiography."  In my case, family background is as powerful as my academic training in laying the groundwork for investigating the dynamics of employment relations.  My father hated his job.  He probably should have been a high school history teacher or basketball coach.  Instead of going to college or pursuing work that interested him, with a large number of brothers and sisters to support, and after serving in the U.S. Navy during World War II, he went to work for the telephone company, first as a lineman and then as a cable splicer, ultimately working there for thirty-six years.  Though the work was physically somewhat hard, it was the political and abusive behavior from telephone company foremen and managers that my father talked about at dinner.  (Later, as an adult, I did some genealogical research and found out that during the late 1880s my French-Canadian great-grandfather had been a telephone company supervisor.  Dad was aghast.)  My father's dissatisfaction with his job and career led me to focus on the work lives of workers, and especially of employees, those who work for somebody aside from themselves.  In hindsight, it seems natural that I became an industrial psychologist.

PG. #193 & 194 ROUSSEAU A psychological contract comprises the beliefs an individual holds regarding an exchange agreement to which he or she is party, typically between an individual and an employer (Rousseau, 1995).  These beliefs are largely based upon promises, implied or explicit, which over time take the form of a relatively stable mental model or schema.  A major feature of a psychological contract is the individual's belief that an agreement exists that is mutual; in effect, his or her belief in the existence of a common understanding with another that binds each party to a particular course of action.  Since individuals rely upon their understanding of this agreement in the subsequent choices and efforts they take, they anticipate benefits from fulfilled commitments and incur losses if another fails to live up to theirs, whatever the individual interprets another's commitments to be.

Psychological contract theory is construct-driven.  The features of this construct, particularly its schematic nature, give rise to its dynamic properties.  These dynamics are central to its distinctive consequences, antecedents, and boundary conditions.  A central dimension of this construct is incompleteness, in that the full array of obligations associated with the exchange is typically not known or knowable at an exchange relationship's outset, requiring the contract to be fleshed out over time.  Incomplete contracts are completed, updated, and revised through processes that affect both the degree of actual agreement between the exchange parties as well as the psychological contract's flexibility in the face of change (cf. Rousseau, 2001).  As a form of schema or mental model, psychological contracts become more durable as they move toward a high level of completeness, wherein they enable prediction of future actions by contract parties and effectively guide individual action.  This durability also poses difficulty in response to changing circumstances.  Sources of information used in developing and completing the psychological contract include the agents of the firm (e.g., managers and human resource representatives) as well as the social influence of peers and mentors, along with administrative signals (e.g., human resource practices) and structural cues (e.g., informal network position) to which individuals are exposed (Rousseau, 1995; Dabos and Rousseau, 2004b).

PG. #209 & 210 ROUSSEAU I worry a bit about generalizing too much from my own experience.  This account plays up my personal and professional circumstances and the fact that I have focused a long time on the same research domain.  Not every interesting problem is anchored in a scholar's life history.  The problem can be created by need, opportunity, or circumstance.  I also doubt that a good theory requires a single dominant theme in one's research over time.  Monod and Jacob managed to discover how gene functioning could be switched on and off and win a Nobel Prize, without having any apparent personal angle to the problem, and each went on to study a variety of other things.  The best advice implicit in my experience is to experiment with ways of working that help you learn and seek out others to help and learn with you.  Here are some ways of working that I found useful.

Figuring out the right question to ask has to be the hardest part.  A good question can guide discovery because even if the answer proves it wrong, you move forward (Wilson, 1999).  The question "Do people think in psychological contract-like ways?" arose from talking with Max Bazerman.  Formulating that question was important since it had the possibility of disconfirmation, and the potential to establish convergent and discriminant validity.  Good questions also call attention to mediating processes that underlie causal relationships.  It is not enough to know that something is related to something else.  Why and how are what matter.

Talking to smart people who think differently than we do helps in identifying important questions.  I was fairly systematic in meeting with colleagues at Northwestern, in the Business School, Psychology, Communications, and Law, to see what suggestions they might have for exploring the notion of a psychological contract.  Being at a good research university with a diverse faculty is a great asset.  I used these conversations to get pointer knowledge about what to read and whom else to talk with.  I learned from their answers to the query, "What do you think would be a good question to ask about X (psychological contract, employment relationships, agreements between workers and employers, etc.)?"  Trying to explain what I thought a psychological contract was and why it mattered invited informed and useful criticism, even if some of my colleagues might refer to it as the "so-called 'psychological contract.'"  Talking with others made it easier to place the construct of a psychological contract into a theoretical framework.  The construct became clearer and more concrete to me while becoming more nuanced and differentiated from look-alike notions of expectations with which the field was already very familiar.  I also learned a lot from taking the theory on the road and doing colloquia.  (NB: This may work better if you aren't looking for a job.)




The Escalation of Commitment: Steps toward an Organizational Theory
BARRY M. STAW

PG. #233 - 235 STAW In advising young scholars interested in developing new theory, I would offer the following tips from escalation research:

First, I would consider the events of the world (from business, government, and politics) to be as rich a source of ideas as any academic literature.  One's own personal and family experiences can also be mined for interesting research ideas.  In the case of escalation, not only was I prompted to the research idea by observing the U.S. involvement in Vietnam, but these observations also took on particular meaning given my prior experiences with the military draft.  In addition, when I derived my specific hypotheses on self-justification, I drew on some vivid personal recollections.  Once, when I was home on vacation from college, my father asked me to look over some financial statements to see whether he should buy a particular retail store.  When I studied the numbers (with my recently acquired knowledge of introductory accounting) and pronounced the purchase to be a waste of money, my father drew a line through both the revenue and cost figures.  He said that the financial forecasts were far too conservative.  When I protested, he admitted that it really didn't matter since he had already purchased the store a few weeks earlier!  Experiences such as these can be an invaluable tool for constructing theory, since they have more depth and meaning than any perusal of the research literature.

Although I tout experience over literature reviews, I still think it is important to confront a potential research question with as broad a theoretical arsenal as possible.  Certainly my initial studies of escalation were shaped in large part by my earlier dissertation.  I also believe that the theory's transformation from a psychological to a more interdisciplinary model may have been aided by prior academic training.  At the time of my graduate work at Northwestern University, the doctoral program in organization research was almost entirely sociological.  Though I was greatly influenced by social psychologists such as Thomas D. Cook and Donald T. Campbell, most of my colleagues and faculty advisors were interested in macro or sociological questions.  Therefore, it was probably easier for me than for other psychologists interested in escalation to make the transition from a largely individually-oriented theory to one that is also based on social and organizational forces.

A third piece of advice from work on escalation would be to approach research questions with as much methodological flexibility as possible.  As I have noted, my research started with a series of laboratory experiments designed to show that, under certain conditions, people may throw good money after bad.  Unfortunately, my own theoretical reasoning did not really broaden until I had worked on some case studies of escalation.  Only then did I realize that escalation was an interdisciplinary problem with multilevel forces at work.  As a result of this experience, I am now a firm believer in the power of grounded research, at least as a means of theory formulation.  Such investigations need not come in the form of publishable case studies.  They can also result from in-depth examinations of organizational events or from interviews with key actors in a social situation.  Regardless, grounded observations will likely enrich your hypotheses and broaden your understanding of an organizational phenomenon.

A fourth tip might concern the orchestration or sequencing of research studies.  When I entered academia, I had the naive impression that discoveries would be followed by press conferences and then a flurry of follow-up research.  Forget the press conferences and be satisfied with a few colleagues (and relatives) reading a paper.  Also forget the flurry of follow-up studies, if you are not willing to do them yourself.  Rarely does a single study ignite enough interest to start a genuine stream of research.  So be prepared to carry on alone for a while.  And, even when others have been brought into a line of research, do not expect them to pursue the issue in exactly the way you might prefer.  That is why I initiated case studies and archival research on escalation.  Without some intervention, I feared that escalation research might stagnate and eventually die in the laboratory.

My fifth and final tip relates to the process of theory formulation itself.  The field of organizational behavior is fond of summary models using a series of boxes and arrows.  I too have found them to be helpful devices in illustrating a theoretical process or set of mechanisms.  My complaint is that much of our field equates the graphical listing of variables with theoretical formulation.  Therefore, we need to be constantly reminded that the goal of theory is to answer the question of "why" (Kaplan, 1964; Merton, 1967).  Strong theory delves into the connections underlying a phenomenon.  It is a story about why acts and events occur, with a set of convincing and logically interconnected arguments (Sutton and Staw, 1995).  Hence, my advice for young scholars is to use diagrams as an aid to theoretical reasoning, but not as ends in themselves.  With luck, your models will have implications that cannot be seen with the naked (or theoretically unassisted) eye, and may have implications that run counter to common sense.  If successful, the product may even satisfy Weick's (1989) dictum that good theory will explain, predict, and delight.




On the Origins of Expectancy Theory
VICTOR H. VROOM

PG. #251 & 252 VROOM Let me digress from a presentation of the formal derivations from the theory to a more personal topic.  Does the theory help me to describe or make sense of my own behavior surrounding its development?  How do I now make sense of my own choices using the expectancy theory framework?
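One common textbook rendering of the theory's core proposition--a paraphrase rather than necessarily Vroom's exact 1964 notation--may help readers follow the valence and expectancy language in the paragraphs below:

$$ F_i = f\left(\sum_{j} E_{ij}\, V_j\right) $$

where $F_i$ is the motivational force to perform act $i$, $E_{ij}$ is the expectancy that act $i$ will be followed by outcome $j$, and $V_j$ is the valence (anticipated attractiveness) of outcome $j$.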

It is now very clear that I was very highly motivated to complete Work and Motivation.  On many nights, I was still working in the university library when it closed at midnight, and I was asked to leave.  Developing a theory which made sense out of otherwise disparate findings was something that was "Hebb-like," albeit in a totally different domain.  Furthermore, it represented a tangible effort at integrating the two disciplines of psychology, which Cronbach had advocated.  Finally, it united theory and application in a manner which might have received Kurt Lewin's blessing.  For these and probably many other reasons, writing Work and Motivation was something that I had to do.  At times, it felt like a labor of love and, at other times, a neurotic compulsion.  It was a completely positively valent endeavor.

It was also clear that this strong desire was intrinsic and not based on a well-conceived career strategy.  My colleagues at Penn kept telling me that what I was doing was the province of those with tenure and that empirically based articles were a far safer course for those on a three-year contract.  If they were correct, I was jeopardizing my chances of getting promoted, at least at Penn, by doing what I was doing.

Complementing this positive valence was a reasonable expectancy that I was capable of "pulling it off."  I have previously alluded to the many sources of support and encouragement I had received during my early academic years.  These served to sustain my belief that I was capable of the task at hand and made it possible to ignore the voices pointing to the peril that could lie ahead.  I also received support from my doctoral students at Penn who read the chapters as they were produced and made many helpful suggestions.  Prior to leaving Penn, I had met Gordon Ierardi, then editor of a highly prestigious Wiley psychology series.  Gordon asked to review my almost completed manuscript on Work and Motivation and subsequently extended a contract.

PG. #256 VROOM At least once a week, I receive an e-mail from a student somewhere around the globe asking for my current thoughts on expectancy theory.  The specific requests vary.  The student has been asked to write a paper or make a presentation on a theorist and has chosen me.  Would I explain the theory to them in simple terms or tell them how I came up with the theory or reveal some anecdote about my personal life that would add "punch" to their presentation?  I am typically in a quandary about how to respond.  Seldom do I have the time to do justice to the request.  Now that this chapter will be available, I have something to which to refer them that might answer their questions.

But my quandary is more than that.  The truth is that I have difficulty jumping "back into the skin" of a 25-year-old on a mission.  Even writing this chapter was not easy.  Fortunately, I had the aid of notes of previous reminiscences to make the task easier.  Expectancy theory was a chapter in my life, not my whole life.  Subsequent events have produced marked changes in my personal agenda.  Some say that I am still "driven" but with different priorities.  In the 1950s and early 1960s, I wore psychology "on my sleeve."  It was the only path to my personal truth.  Business schools and schools of management were, in my mind, lower-class institutions uninitiated in the scientific method.

Perhaps it was the nine years at Carnegie Mellon or subsequently the thirty-plus at Yale helping to found and then teach in their new School of Management that has produced a different frame of mind.  Or perhaps it is simply the passage of time that has dimmed the single-minded idealism of youth and replaced it with a more balanced and societally anchored quest.  Forty years of attempting to make the behavioral sciences relevant to present and future managers has made me highly sympathetic to their needs.  Furthermore, the science of psychology is no longer my primary goal but is rather a means to the goal of helping managers to better understand themselves, those with whom they work, and the organizations they serve.  I like to think that I have not abandoned the scientific method.  Instead, I have tried to use it in ways to help managers deal with the complexities in their world (Vroom and Yetton, 1973; Vroom and Jago, 1988; Vroom, 2003).

Along with this changed role of science in my life has come an increased impatience with the trappings of formal science.  Often the postulates, assumptions, derivations, and formal mathematical models of my youth seem like a premature attempt to mimic the physical sciences and do little to advance the state of our knowledge, particularly knowledge that is actionable.  Furthermore, I no longer seek one lens or theory that will explain or unify it all.  Pluralism and the interplay of conflicting modes of sense-making have replaced my need for order and convention.  Perhaps the jazz musician and the psychologist have finally come together!




Double-Loop Learning in Organizations: A Theory of Action Perspective
CHRIS ARGYRIS

PG. #261 & 262 ARGYRIS I began my work on organizational behavior by observing several puzzles.  The first was that human beings created policies and practices that inhibited the effectiveness of their organizations.  Why did human beings create and maintain these policies and practices that they judged to be counterproductive?

The second puzzle was that human beings reported a sense of helplessness about changing these policies and activities because they were the victims of organizational pressures not to change them.  How did human beings create organizational pressures that inhibited them from changing the phenomena they saw as counterproductive?  Is it possible to help individuals and organizations to free themselves from this apparent self-imposed enslavement?  I begin my inquiry with an examination of how action is produced, be it productive or counterproductive.  I then examine the role of learning, focusing on learning that challenges the existing routines and the status quo.  Next, I present a model of a theory of action that explains the puzzles described above.  This is followed by a description of a theory of action that can be used to resolve these puzzles.  Next is a description of intervention processes, derivable from the theory, that can be used to get from here to there.  This is followed with discussions of some implications for scholars in developing theory and conducting empirical research that leads to actionable knowledge.  I close with some personal observations of my tribulations over the years while building the theory and conducting research.

PG. #273 - 274 ARGYRIS It is important that scholars take more initiatives in building theories and in conducting empirical research that questions the status quo.  The first reason is that it makes it less likely that scholars will become, unintentionally, servants of the status quo.  The second reason is that identifying possible inconsistencies and inner contradictions is a powerful way to examine our own inconsistencies and inner contradictions.  For example, we espouse that we should describe the universe (that we construct) as accurately and completely as possible.  This means that we should include research on how the universe would act if it were being threatened.  In order to conduct such research we require empirical research on how the existing status quo inhibits learning and produces inner contradictions.  This, in turn, requires the development of testable theories about new and rare universes, which, if implemented, would threaten the status quo.

The long-term commitment to describing the universe as it is inhibits the study of new universes that would encourage liberating alternatives and double-loop learning.  For example, the core concepts of the behavioral theory of the firm include the existence of competing coalition rivalries and limited learning.  The limited learning is partially caused by the limited information-processing capacity of the human mind.  This claim is, in my opinion, valid.  Another claim that may also be valid is that the competing coalitions (and other organizational defensive routines) may also limit learning.  To my knowledge, scholars have not tested this claim.  More importantly, they appear not to do so because they (e.g., March) express doubts that such factors as mistrust produced by competitiveness can be reduced (see Argyris, 1996).  Burgelman also doubts that organizational defensive routines can be reduced.  He also acknowledges that not testing this claim could have anti-learning and self-sealing consequences (see Argyris, 2004).

Similar questions may be raised about the rules and norms of conducting rigorous empirical research.  For example, the theory-in-use (not espoused theory) of rigorous research is consistent with Model I.  It is the researchers who are in unilateral control.  The result is that the empirical propositions when implemented create a world consistent with Model I (Argyris, 1980, 1993).  For example, studies on communication to generate trust advise that, when communicating to "smart" people, offer them several views.  When communicating to "those judged to be less smart," offer one view.  Implementing this advice requires that the implementer cover up the reasoning behind it.  It also requires that the implementer covers up the cover-up.  None of these consequences are explored by the researchers.  Yet, all of them are consistent with Model I theory-in-use; a theory-in-use that facilitates mistrust.

Studies on frustration and regression conclude that mild frustration actually produces creative reactions.  After a certain frustration threshold is passed, the predicted regression results (Argyris, 1980, 1993).  Let us assume that a leader wishes to generate the creativity predicted during the early stages.  This would mean that she would have to create low to moderate frustration.  It also is likely that she cannot tell the "subjects" of her intention because doing so could lead the subordinates to react negatively to what they may interpret as her manipulation.  If some do not react negatively, then she would have created sub-groups that conflict with each other.  In short, the leader would have to cover up that she is covering up.  If there are members of the group who believe that she is covering up, they too may cover up their attributions.  They would place these thoughts and feelings in their left-hand column.  The multilevel cover-up will make it more difficult to assess the arrival of the threshold point beyond which regressions would appear.

All of these issues arise when attempts are made to implement the knowledge produced in the original experiments.  These conclusions appear to hold for humanities research intended to bypass the Model I theory-in-use.  Indeed the same appears to be true for interpretive research where testing stories is a primary methodology (Argyris, 2004).

These and other similar observations (Argyris, 1997) raise doubts that our theories and our research methods are neutral to normative features of everyday life.  The theories and empirical research methodologies are highly influenced by Model I and organizational defensive routines.  They are not neutral whenever social scientists create theories limited to Model I and use research methods whose theory-in-use is Model I.  Moreover, because they get rewarded for doing so by the norms of their scholarly community, they become skillfully unaware of the limits of their claims, especially about neutrality and the promise of a scientific enterprise that does not limit truth-seeking (Miner and Mezias, 1996).

13.4 THE ROLE OF INTERVENTION Intervention is the most effective methodology for empirical research, related to double-loop learning.  Interventions are social experiments where understanding and explanation are in the service of valid implementation intended to be of help.  It is difficult for an interventionist to obtain permission and request cooperation from "subjects" on the claim that the research may be helpful and then stop before providing such help.  The "subjects" would feel betrayed because the promise to be of help includes implementation (Argyris, 2003).  These feelings of betrayal are being built up within society--including congressmen and foundations--by researchers who have promised that they are committed to producing valid and actionable knowledge but who fail to fulfill their promises (Argyris, 1993; Argyris and Schon, 1996; Johnson, 1993).

Interventions require skills for producing internal and external validity.  Such skills can be developed and taught.  Interventionists also need to develop Model II skills if they choose to give implementable validity equal status.  Implementable validity has its own internal and external features.  Internal implementable validity is established by the degree to which the claims in the proposition actually lead to the specified consequences.  For example, it is claimed that Model I theory-in-use is an important cause of organizational defensive routines.  This causal claim can be tested through observations.  External implementable validity is assessed by the extent to which specified organizational defensive routines are reduced when human beings become skilled at Model II theories-in-use.  The former prediction is internal as long as it is not implemented.  The moment we implement the claim, the validity of the implementation is external.

 




Where does Inequality Come from? The Personal and Intellectual Roots of Resource-Based Theory
JAY B. BARNEY

 

PG.# 282 BARNEY
In retrospect, this outcome should not have surprised me.  The mythology of equality was so entrenched among those that administered this educational program that they actually lacked the ability to recognize differences among the students.  Giving everyone the same grade was simply their way of making sure "no one got left behind."  Of course, in this Lake Wobegon world where all students are above average, there is also no room for excellence, no room for uniqueness, and no room for distinction.  And, as it turned out, no room for me.  I left the program after one semester.

Thus, to me, questions about the "rightness and wrongness" of inequality have always been central.  Indeed, in many ways, my academic career--and certainly my efforts in helping to develop resource-based theory in the field of strategic management--can be understood as an effort to understand the relationship between these two "theories" of inequality in society--that it is morally bad and that it is both inevitable and can be good.  That I have chosen to confront these issues in the context of business firms is at least partially a matter of chance and good fortune.  I could have chosen to confront these same issues in a very different context, say in the context of the ideological struggle between socialism and capitalism.  Whether we study "Why do some firms outperform others?" or "Why do some economic systems outperform others?", at some level, these are both questions about the causes and consequences of inequality.2



2    My interest was in understanding the causes and consequences of inequality in outcomes.  In high school, I was less interested in inequality in opportunities since--in my white, middle-class high school--inequality in opportunities was not likely to be much of a problem.  However, in retrospect, it seems to me that my high school teachers adopted the same logic that I will describe among SCP scholars--that heterogeneity in outcomes must reflect heterogeneity in opportunity.  This conclusion only makes sense if people/firms are perfectly homogeneous.



PG.# 285 BARNEY
I concluded that--at Yale, at least--there was no sociological theory, only a loosely connected set of ideas that were applied to studying a wide variety of disconnected phenomena--the sociology of medicine, the sociology of sport, the sociology of religion.  Sociology had become applied statistics.

A simple story makes this point.  The Ph.D. students in the Sociology Department decided to form a softball team for the graduate school softball tournament.  At the organizing meeting, we had to choose a name for our team.  Here we were, fifteen Ph.D. students in sociology, and we couldn't come up with a single uniquely sociological concept which we could use to name our team.  In the end, we decided to call ourselves the "Chi Squares"--we gave up on sociological theory as a source of inspiration and fell back on statistics!5



5    I personally liked the name proposed by a child of one of my fellow students--"The Swords!"



PG.# 295 BARNEY
And, what is personally satisfying is that resource-based theory really is a theory about inequality in society.  While acknowledging that sometimes inequality in outcomes can be inefficient, even evil, resource-based theory's core message is: heterogeneity in outcomes in society is common and natural and is often good for all of us, those who are advantaged as well as those who are disadvantaged.  If firms are "better off" because they are more skilled at addressing customer needs, then this inequality in outcomes is perfectly consistent with maximizing social welfare in society.

PG.#296 BARNEY I remember meeting with a new Ph.D. student who had arrived on campus early and was interested in getting a head start on his reading.  He came to my office and asked me what he should read.  Following the example of Bill Ouchi, I suggested that he read Williamson's Markets and Hierarchies and come back in a few weeks to talk about it.  The student came back with a forty-page summary of Williamson's arguments that he wanted to give to me.  I thanked him, but declined.  My response to him was, "I know you have read the book and can summarize what's in it.  My only question for you now is--what is missing from the book?"  That was a question this new Ph.D. student had not considered.  A week later, we got together again and had a rousing discussion of what Williamson's book did not cover.

For me, personally, if I had not had an in-depth understanding of the new institutional economics, it would have been very difficult for me to contribute to the development of resource-based theory.  This is the case, even though the connections between these sets of ideas are subtle and complex.14  Institutional economics provided me with the tools, but more importantly, a way of thinking about problems, that was instrumental in my resource-based work.  But it was what was missing in institutional economics--a rigorous theory of inequality among competing firms--that led me to think more about resource-based logic.

This said, once one understands the literature, the essential task is to learn to ignore that which you have learned.  Prior literature is both a guide and a blinder.  I have found in my own case that knowing the literature too well can actually prevent me from generating new insights.



14    Indeed, the connection between, say, transaction cost economics and resource-based theory continues to be discussed today.  See, for example, Leiblein and Miller (2003).



PG.# 297 BARNEY
The field of strategic management has become enamored with what I call the "norm of completeness."  This norm suggests that a single paper can develop a new theory, derive specific testable hypotheses from this theory, develop appropriate data and methods to test these hypotheses, report results, and discuss the theoretical implications of these results--all within thirty-two manuscript pages.  This is insane.

Writing papers that meet the norm of completeness generally means that authors have to compromise on some aspect of their paper.  In general, for most of our journals, the part of the paper that gets short shrift is the theory section.  For most empirical work, theory means: Show how your research question is related to a body of previous literature and develop some new hypotheses that typically require no more than a paragraph of justification.  Indeed, it is not too much of an overstatement to say that there is almost no new theory in most empirical papers.

Look at the seminal theoretical papers and books in strategy.  As Bill Ouchi used to say, "The only numbers in these seminal contributions are page numbers."16  Moving too quickly to traditional empirical tests of theory can doom creative efforts.



16    The one exception to this assertion may be Kogut's (1991) paper on real options that developed new and very interesting theory but also had empirical tests.



PG. #299 BARNEY
Of course, I am not appalled if the theories we develop happen to have implications for managers and firms.  Indeed, it is not uncommon that the theories developed by strategy scholars have broad managerial implications.  I consider this a "happy accident."  The reason I develop theory is to solve theory problems, not to solve managerial problems.

I recognize that this perspective contradicts some widely held beliefs about the relationship between business scholars and practitioners.  One of those beliefs is that practitioners typically lead scholars--that the best scholarship describes the actions of practitioners and rationalizes these actions relative to theory.  And, it is certainly the case that empirical research assumes that managers have been behaving in ways consistent with a particular theory in order to generate data consistent with theoretical expectations.

However, in my career, I have met very few managers who are also good theorists.  In fact, they are usually quite bad at it.  For example, ask any successful entrepreneur why they are successful, and they will give some version of the following answer: "I worked hard, I took risks, and I surrounded myself with good people."  Go to a failed entrepreneur and ask what happened, and they will say, "I don't know.  I worked hard, I took risks, and I surrounded myself with good people."  Theory suggests that working hard, taking risks, and surrounding yourself with good people are not sufficient for entrepreneurial success.  Indeed, given the role of luck in entrepreneurial endeavors, such attributes may not even be necessary for entrepreneurial success.  However, few entrepreneurs have broad enough experiences to be able to develop this general theory.




Organizational Effectiveness: Its Demise and Re-emergence through Positive Organizational Scholarship
KIM CAMERON

PG. #305 - 307 CAMERON The earliest models of organizational effectiveness emphasized "ideal types," that is, forms of organization that maximized certain attributes.  Weber's (1947) characterization of bureaucracies is the most obvious and well-known example.  This "rational-legal" form of organization was based on rules, equal treatment of all employees, separation of position from person, staffing and promotion based on skills and expertise, specific work standards, and documented work performance.  These principles were translated into dimensions of bureaucracy, including formalization of procedures, specialization of work, standardized practices, and centralization of decision making (Perrow, 1986).

Early applications of the bureaucratic model to the topic of effectiveness proposed that efficiency was the appropriate measure of performance--i.e., avoidance of uncoordinated, wasteful, ambiguous activities.  That is, the more nearly an organization approached the ideal bureaucratic characteristics, the more effective (i.e., efficient) it was.  The more specialized, formalized, standardized, and centralized, the better.

Subsequent scholars challenged these assumptions, however, suggesting that the most effective organizations are actually non-bureaucratic.  Barnard (1938), for example, argued that organizations are cooperative systems at their core.  An effective organization, therefore, channels and directs cooperative processes to accomplish productive outcomes, primarily through institutionalized goals and decision making processes.  Barnard's work led to three additional ideal type approaches to organization--Selznick's (1948) institutional school, Simon's (1956) decision making school, and Roethlisberger and Dickson's (1947) human relations school.  Each of these schools of thought represents an ideal to which organizations should aspire--e.g., shared goals and values, systematic decision processes, or collaborative practices.  Whereas devotees disagreed over what the ideal standard must be for judging effectiveness, all agreed that effectiveness should be measured against an ideal standard represented by the criteria.

Over the years, ideal types proliferated, including goal accomplishment (Price, 1982), congruence (Nadler and Tushman, 1980), social equity (Keeley, 1978), and interpretation systems (Weick and Daft, 1983).  However, mounting frustration over the conflicting claims of ideal type advocates gave rise to a "contingency model" of organizational effectiveness.  This perspective argued that effectiveness is not a function of the extent to which an organization reflects qualities of an ideal profile but, instead, depends on the match between an organization's attributes and its environmental conditions.

Burns and Stalker's (1961) differentiation between organic and mechanistic organizational types represents an early bridge from ideal type to contingency models.  These authors argued that mechanistic organizations (e.g., those reflecting Weber's bureaucratic dimensions) are best suited to highly stable and relatively simple environments.  In contrast, organic organizations (e.g., those reflecting Barnard's cooperative dimensions) are better suited to rapidly changing, highly complex situations.  This idea spawned several significant research programs, all based on a contingency view of effectiveness--Lawrence and Lorsch's (1967) study of multiple industries in which differentiation and integration were predictive of effectiveness, the Aston studies in England (Pugh, Hickson, and Hinings, 1969) in which structural arrangements were predictive of effectiveness, and Van de Ven and Ferry's (1980) development of the Organizational Assessment Survey in which different processes and design features were predictive of effectiveness.  All these studies concluded that evaluations of effectiveness differed depending on environmental circumstances.  Complex and changing environments give rise to different appropriate effectiveness criteria than do stable and undemanding environments.

A third shift occurred in the conception of organizations as economists and organizational theorists became interested in accounting for transactions across organizational boundaries and their interactions with multiple constituencies.  This emphasis highlighted the relevance of multiple stakeholders in accounting for an organization's performance (e.g., Williamson, 1983; Connolly, Conlon, and Deutsch, 1980; Zammuto, 1984).  Effective organizations were viewed as those which had accurate information about the demands and expectations of strategically critical stakeholders and, as a result, adapted internal organizational activities, goals, and strategies to match those demands and expectations.  This viewpoint held that organizations are elastic entities operating in a dynamic force field which pulls the organization's shape and practices in different directions--i.e., molding the organization to the demands of powerful interest groups including stockholders, unions, regulators, competitors, customers, and so forth.  Effectiveness, therefore, is a function of qualities such as learning, adaptability, strategic intent, and responsiveness.




Managerial and Organizational Cognition: Islands of Coherence
ANNE S. HUFF

 

PG. #332 HUFF The foundation for understanding managerial and organizational cognition (MOC) was laid in the 1980s.  The Thinking Organization (1986), edited by Sims and Gioia, was an important early landmark that showed how management scholars were applying a cognitive perspective to a broad range of management subjects.  I wanted the book I edited, Mapping Strategic Thought (1990), to be the next major landmark.  It provides a hierarchy for organizing work in the field, and ties that organizing framework to a set of available methodologies.  Key concepts from this book and other work I did in the "golden era" of MOC research are summarized in the first part of this chapter.  The second part describes how I moved from thinking about cognition as the central aspect of strategic decision making to making cognition the anchor of a broader attempt to understand strategic action.  This transition is part of a general shift in strategy and organization theory toward dynamic models.  I suggest that we could be entering a new era of enthusiasm for cognitive research because of the requirements of these models.

My research interests and objectives have been informed by others' work, and I am particularly aware of the influence of people at the University of Illinois, one of the important centers of cognitive research (in management and in other fields) in the 1980s.  It is not possible to describe MOC in detail in this chapter, but it is interesting to relate a brief summary of MOC to descriptions of scholarly development from the philosophy of science, which I do toward the end of the chapter.  That leads to some advice for readers in the conclusion.

PG. #345 & 346 HUFF 16.4 LINK TO PHILOSOPHY OF SCIENCE The editors of this book ask that authors relate their own theory building efforts to accounts from the philosophy of science.  I have been particularly influenced by the work of Thomas Kuhn (1970).  It seems obvious to me that Kuhn's emphasis on a "paradigm" as an organizing collection of shared assumptions and practices was strongly influenced by emerging cognitive science.  Furthermore, I believe widespread references to Kuhn in management studies are due at least in part to familiarity with the idea of schematic frameworks.

Most of the observations in this chapter can be put into a Kuhnian framework: Cognitive science as a field was developing a strong paradigm around schema theory in the 1970s.  Work in MOC drew on this source, but was developing its own interests and methods as a subfield in the subsequent decades.  The MOC division in the Academy of Management provided an important forum for regular interaction, and usefully promoted both methodological and theoretical discussions.  Similar but distinctive meetings were being held in Europe, with enough international travel to enrich the worldwide gene pool of research ideas.

My mapping book was an attempt to contribute to theoretical arguments in this field as well as to codify tools and methods.  The book was strengthened by knowledge of and discussion of research activities at the University of Illinois, especially in the business school, but also in psychology and other fields.  Other strong centers for cognitive research, especially at New York University, Penn State, Cranfield University, Bath, and Strathclyde, provided other hospitable climates.

Although all of this is compatible with Kuhn's account of paradigmatic science, the historical development of MOC also refutes some aspects of his account.  In particular, the development of theory has been less coherent than a reading of Kuhn might suggest.  Many opportunities for sustained conversation, even in the areas of environmental interpretation and competitor analysis where work has been most concentrated, have not fully developed.  In part this seems to be due to a strong desire for independence, which decreases desirable cross-citation, and lures many individuals into new directions before they fully develop their current projects.  Cumulative activity also seems to be weakened by journals that encourage claims of independent discovery.  But neither of these seems to be a sufficient explanation.




Developing Theory about the Development of Theory
HENRY MINTZBERG

 

PG. #355 & 356 MINTZBERG I have no clue how I develop theory.  I don't think about it; I just try to do it.  Indeed, thinking about it could be dangerous:

The centipede was happy quite
Until a toad in fun
Said, "Pray, which leg gores after which?"
That worked her mind to such a pitch,
She lay distracted in a ditch
Considering how to run.
(Mrs. Edward Craster, 1871)

I have no desire to lie distracted in a ditch considering how to develop theory.  Besides, that's the work of cognitive psychologists, who study concept attainment, pattern recognition, and the like, but never really tell us much about how we think.  Nonetheless, I'll take the bait, this once, at the request of the editors of this book, because I probably won't get far either.

I want to start with what theory isn't and then go on to what theory development isn't, for me at least, before turning, very tentatively, to what they seem to be.

17.1 WHAT THEORY ISN'T: TRUE It is important to realize, at the outset, that all theories are false.  They are, after all, just words and symbols on pieces of paper, about the reality they purport to describe; they are not that reality.  So they simplify it.  This means we must choose our theories according to how useful they are, not how true they are.  A simple example will explain.

In 1492, we discovered truth.  The earth is round, not flat.  Or did we?  Is it?

To make this discovery, Columbus sailed on the sea.  Did the builders of his ships, or at least subsequent ones, correct for the curvature of the sea?  I suspect not; to this day, the flat earth theory works perfectly well for the building of ships.

But not for the sailing of ships.  Here the round earth theory works much better.  Otherwise, we would not have heard from Columbus again.  Actually that theory is not true either, as a trip to Switzerland will quickly show.  It is no coincidence that it was not a Swiss who came up with the round earth theory.  Switzerland is the land of the bumpy earth theory, also quite accurate--there.  Finally, even considered overall, say from a satellite, the earth is not round; it bulges at the equator (although what to do with this theory I'm not sure).

If the earth isn't quite round or flat or even even, then how can we expect any other theory to be true?  Donald Hebb, the renowned psychologist, resolved this problem quite nicely: "A good theory is one that holds together long enough to get you to a better theory."

But as our examples just made clear, the next theory is often not better so much as more useful for another application.  For example, we probably still use Newton's physics far more than that of Einstein.  This is what makes fashion in the social sciences so dysfunctional, whether the economists' current obsession with free markets or the psychologists' earlier captivation with behaviorism.  So much effort about arm's lengths and salivating dogs.  Theory itself may be neutral, but the promotion of any one theory as truth is dogma, and that stops thinking in favor of indoctrination.

So we need all kinds of theories--the more, the better.  As researchers, scholars, and teachers, our obligation is to stimulate thinking, and a good way to do that is to offer alternate theories--multiple explanations of the same phenomena.  Our students and readers should leave our classrooms and publications pondering, wondering, thinking--not knowing.

PG. #358 & 359 MINTZBERG But it does so much of the time, because we confuse rigor with relevance, and deduction with induction.  Indeed the proposal I received for this very book did that: "...the process of theory building and testing is objective and enjoys a self-correcting characteristic that is unique to science.  Thus the checks and balances involved in the development and testing of theory are so conceived and used that they control and verify knowledge development in an objective manner independent of the scientist."  They sure do: that is why we see so little induction in our field, the creation of so little interesting theory.

Karl Popper, whose name a secretary of mine once mistyped as "Propper," wrote a whole book about The Logic of Scientific Discovery (1959).  In the first four pages (27-30), in a section entitled "The Problem of Induction," he dismissed this process, or more exactly what he called, oxymoronically, "inductive logic."  Yet with regard to theory development itself, he came out much as I did above.

The initial stage, the act of conceiving or inventing a theory, seems to me neither to call for logical analysis nor to be susceptible of it.  The question how it happens that a new idea occurs to a man--whether it is a musical theme, a dramatic conflict, or a scientific theory--may be of great interest to empirical psychology; but it is irrelevant to the logical analysis of scientific knowledge.  This latter is concerned not with questions of fact (Kant's quid facti?), but only with questions of justification or validity (Kant's quid juris?)...Accordingly, I shall distinguish sharply between the process of conceiving a new idea, and the methods and results of examining it logically.  (Popper, 1959: 31)

Fair enough.  But why, when he devoted the rest of his book to "the deductive method of testing" (p. 30), did Popper title his book "The Logic of Scientific Discovery"?  What discovery is there in deduction?  Maybe something about the how, why, when, and where of given theory (as noted earlier), but not the what--not the creation of the theory itself.  (Indeed why did Popper call his book The Logic of Scientific Discovery when in the passage above he used, more correctly, the phrase "scientific knowledge"?)  And why have untold numbers of researchers-in-training been given this book to read as if it is science, and research, when it is only one side of these, and the side wholly dependent on the other, which is dismissed with a few words at the beginning?  What impression has that left on doctoral students in our fields?  (Read the journals.)  As Karl Weick (1969: 63) quoted Somerset Maugham, "She plunged into a sea of platitudes, and with the powerful breast stroke of a channel swimmer made her confident way toward the white cliff of the obvious."

Popper devoted his book to deductive research for the purposes of falsifying theories.  But as noted earlier, falsification by itself adds nothing; only when it is followed by the creation of new theories or at least the significant adaptation of old ones do we get the necessary insights.  As Albert Hirschman put it, "A model is never defeated by the facts, however damaging, but only by another model."

PG. #361 MINTZBERG 17.4 WHAT THEORY DEVELOPMENT SEEMS TO BE: UNEXPECTED We get interesting theory when we let go of all this scientific correctness, or to use a famous phrase, suspend our disbeliefs, and allow our minds to roam freely and creatively--to muse like mad, albeit immersed in an interesting, revealing context.  Hans Selye, the great physiologist, captured this sentiment perfectly in quoting one item on a list of "Intellectual Immoralities" put out by a well-known physiology department: "Generalizing beyond one's data."  Selye quoted approvingly a commentator who asked whether it would have been more correct for this to read: "Not generalizing beyond one's data" (1964: 228).  No generalizing beyond the data, no theory.  And no theory, no insight.  And if no insight, why do research?

Theory is insightful when it surprises, when it allows us to see profoundly, imaginatively, unconventionally into phenomena we thought we understood.  To quote Will Henry, "What is research but a blind date with knowledge?"  No matter how accepted eventually, theory is of no use unless it initially surprises--that is, changes perceptions.  (A professor of mine once said that theories go through three stages: first they're wrong; then they're subversive; finally they're obvious.)

All of this is to say that there is a great deal of art and craft in true science.  In fact, an obsession with the science, narrowly considered, gets in the way of scientific development.  To quote Berger, "In science, as in love, a concentration on technique is likely to lead to impotence" (1963: 13).




Managing Organizational Knowledge: Theoretical and Methodological Foundations

IKUJIRO NONAKA

 

PG. #376 & 377 NONAKA 18.1 KNOWLEDGE/TRUTH Knowledge has been traditionally defined as "justified true belief."  A fundamental issue in various streams of epistemology is how one can justify one's subjective belief as objective "truth."  In other words, the issue is whether human beings can ever achieve any form of knowledge that is independent of their own subjective construction, since they are the agents through which knowledge is perceived or experienced (Morgan and Smircich, 1980).  While the ontological position of positivism--the world as concrete structures--supports objective knowledge, the phenomenological philosophers see the world as inherently subjective.

The Cartesian split and power of reasoning support the view of objective knowledge and truth in positivism.  John Locke, among others, maintained that human knowledge is an inner mental representation (or mirror image) of the outside world that can be explained in linguistic signs and mathematics through reasoning.  All things beyond thought and the senses consequently do not exist and/or are irrelevant.  Loosely following this conceptualization, traditional economic and psychological theories are limited to objective knowledge, which can be processed through formal logic and tested empirically.  The advantage of this mono-dimensional notion of knowledge is that it allows scholars further to claim that all genuine human knowledge is contained within the boundaries of science.

In contrast, for phenomenological philosophers knowledge is subjective, context-specific, bodily, relative, and interpretational (Heidegger, 1962; Husserl, 1970, 1977; Merleau-Ponty, 1962).  They rather uniformly claim that the mental and the physical worlds evolve in a dialectic joint advent.  As meanings emerge through experiences, primacy is placed on subjective tacit knowledge over objective propositional knowledge.  Practical knowledge is often prioritized over theoretical knowledge (Hayek, 1945; Polanyi, 1952, 1966).  Tacit knowledge, accumulated in dialectic individual-environment interaction, is very difficult to articulate (Polanyi, 1952, 1966).  Husserl (1977) believed in attaining true knowledge through "epoche" or "bracketing," that is, seeing things as they are and grasping them through a kind of direct insight.  Pure phenomenological experience is even claimed to precede cognition (Nishida, 1970).

The wide and fundamental ontological and epistemological differences between positivism and phenomenology create methodological challenges.  It can be claimed that the positivist dominance has limited comprehensive context-specific discussions on knowledge in management science.  This problem was already noticed by Edith Penrose (1959), who argued that the relative neglect was the result of the difficulties involved in taking knowledge into account.  This is because positivist epistemology is based on the assumption that lived experiences can be linguistically carved up and conventionally portioned into preexistent conceptual categories for the purposes of systematic analysis and causal attribution.  In effect, positivism-based social science tries to freeze-frame the dynamic and living social world into a preexisting static structure.

In contrast to the context-free positivist mirror image of the human mind and the environment, the knowledge-creating theory is rooted in the belief that knowledge inherently includes human values and ideals.  The knowledge creation process cannot be captured solely as a normative causal model because human values and ideals are subjective and the concept of truth depends on values, ideals, and contexts.

However, the knowledge-creating theory does not view knowledge as solely subjective.  It treats knowledge creation as a continuous process in which subjective tacit knowledge and objective explicit knowledge are converted into each other (Nonaka, 1991, 1994; Nonaka and Takeuchi, 1995).  The boundaries between explicit and tacit knowledge are porous as all knowledge and action is rooted in the tacit component (Tsoukas, 1996).  Tacit knowledge, in turn, is built partly on the existing explicit knowledge since tacit knowledge is acquired through experiences and observations in the physical world.

Viewing the knowledge-creating process as the conversion process between tacit and explicit knowledge means that it is viewed as the social process of validating truth (Nonaka, 1994; Nonaka and Takeuchi, 1995).  Contemporary philosophers claim that group validation produces knowledge that is not private and subjective (Rorty, 1979).  As long as knowledge stays tacit and subjective, it can be acquired only through direct sensory experience, and cannot go beyond one's own values, ideals, and contexts.  In such a case, it is hard to create new knowledge or achieve universality of knowledge.  Through the knowledge conversion process, called the SECI process, personal subjective knowledge is validated socially and synthesized with others' knowledge so that knowledge keeps expanding (Nonaka and Takeuchi, 1995).

Unlike positivism, the knowledge-creating theory does not treat knowledge as something absolute and infallible.  The truth can be claimed to be incomplete as any current state of knowledge is fallible and influenced by historical factors such as ideologies, values, and interests of collectives.  The knowledge-creating theory views knowledge and truth as the result of a permanent and unfinished questioning of the present.  While absolute truth may not be achieved, the knowledge validation leads to ever more true and fewer false consequences, increasing plausibility.  The pragmatic solution is to accept collectively "objectified" knowledge as the "truth" because it works in a certain time and context.  Hence, knowledge-creating theory defines knowledge as a dynamic process of justifying personal belief towards the "truth."

PG. #390 NONAKA The chapter argues that building the theory of knowledge creation needs an epistemological and ontological discussion, instead of just relying on a positivist approach, which has been the implicit paradigm of social science.  The positivist rationality has become identified with analytical thinking that focuses on generating and testing hypotheses through formal logic.  While providing a clear guideline for theory building and empirical examinations, it poses problems for the investigation of complex and dynamic social phenomena, such as knowledge creation.  In positivist-based research, knowledge is still often treated as an exogenous variable or a distraction from linear economic rationale.  The relative lack of alternative conceptualization has meant that management science has slowly become detached from the surrounding societal reality.  The understanding of social systems cannot be based entirely on natural scientific facts.




The Experience of Theorizing: Sensemaking as Topic and Resource

KARL E. WEICK

 

PG. #395 WEICK 19.1 ON SENSEMAKING Sensemaking, viewed as central both to the process of theorizing and to the conduct of everyday organizational life, is a sprawling collection of ongoing interpretive actions.  To define this "sprawl" is to walk a thin line between trying to put plausible boundaries around a diverse set of actions that seem to cohere, while also trying to include enough properties so that the coherence is seen as distinctive and significant but something less than the totality of the human condition.  This bounding is a crucial move in theory construction.  It starts early, but it never stops.  Theorizing involves continuously resetting the boundaries of the phenomenon and continuously rejustifying what has newly been included and excluded.  In theorizing, as in everyday life, meanings always seem to become clear a little too late.  Accounts, cognitions, and categories all lie in the path of earlier action, which means that definitions and theories tend to be retrospective summaries of ongoing inquiring rather than definitive constraints on future inquiring.  These complications are evident in efforts to define sensemaking.

Some portraits of sensemaking suggest that it resembles an evolutionary process of blind variation and selective retention.  "An evolutionary epistemology is implicit in organizational sensemaking, which consists of retrospective interpretations built during interaction" (Weick 1995b: 67).  Hence we see sensemaking being aligned with the insight that "a system can respond adaptively to its environment by mimicking inside itself the basic dynamics of evolutionary processes" (Warglien, 2002: 110), an insight that is tied directly to theory development when theorizing is described as "disciplined imagination" (Weick, 1989).
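Warglien's point about systems that internally mimic evolutionary dynamics can be made concrete in a few lines of code.  The Python sketch below is a generic toy illustration of blind variation and selective retention, not anything drawn from Weick or Warglien; the "environment," the mutation step, and the fitness measure are all invented for the example.

import random

random.seed(3)

def fit(candidate, environment):
    # Selection criterion: how closely an interpretation matches what
    # the environment presents (negative distance, so higher is better).
    return -sum(abs(c - e) for c, e in zip(candidate, environment))

environment = [4, 1, 3, 5, 2]      # what is actually "out there"
retained = [0, 0, 0, 0, 0]         # the initial, uninformed interpretation

for _ in range(2000):
    # Blind variation: perturb the retained interpretation at random.
    variant = [x + random.choice((-1, 0, 1)) for x in retained]
    # Selective retention: keep the variant only if it fits better.
    if fit(variant, environment) > fit(retained, environment):
        retained = variant

print(retained)   # converges toward the environment: [4, 1, 3, 5, 2]

Nothing in the loop "knows" the answer in advance; retrospective comparison of variants does all the work, which is the sense in which variation is blind and retention is selective.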

PG. #405 WEICK The "known facts" and "empirical findings" that theories "explain" can precede theory construction or follow it.  The fact that theory construction is a form of retrospective sensemaking does not decouple it from facts.  Rather, it means that facticity is often an achievement.  Having first said something, theorists discover what they have been thinking about when they look more closely at that talk.  A close look at the talk often suggests that the talk is about examples, experiences, and stories that had previously been understood though not articulated.  The talk enacts facts because it makes that understanding visible, explicit, and available for reflective thinking, but the talk doesn't create the understanding.  Instead, it articulates the understanding by converting "know how" into "know that."  Sensemaking, with its insistence on retrospective sensemaking, is a valuable standpoint for theorizing because it preserves the proper order for understanding and explanation (understanding precedes explanation: Sandelands, 1990: 241-247).  It reminds the investigator to keep saying and writing so that he or she can have something to see in order then to think theoretically.

PG. #406 WEICK This is not as haphazard as it sounds.  Instead, these stop rules for theory simply recognize that theories are coherent orientations to events, sets of abstractions, consensually validated explanations, and embodiments of aphoristic thinking.

Reber's definition is also intriguing because it talks about theory as a label that is "awarded" to almost any honest attempt at explanation.  Here we get a hint that theory is a continuum and an approximation.  The image of theory as a continuum comes from Runkel.

Theory belongs to the family of words that includes guess, speculation, supposition, conjecture, proposition, hypothesis, conception, explanation, model.  The dictionaries permit us to use theory for anything from "guess" to a system of assumptions...(Social scientists) will naturally want to underpin their theories with more empirical data than they need for a speculation.  They will naturally want a theory to incorporate more than one hypothesis.  We plead only that they do not save theory to label their ultimate triumph, but use it as well to label their interim struggles.  (Runkel and Runkel, 1984: 130)

As we have seen, most products that are labeled theory actually approximate theory.  Robert Merton (1967: 143-149) was sensitive to this point and suggested that there were at least four ways in which theory was approximated.  These were (1) general orientation, in which broad frameworks specify types of variables people should take into account without any specification of relationships among these variables (e.g., Scott, 1998, analyzes rational, natural, and open systems); (2) analysis of concepts, in which concepts are specified but not interrelated (e.g., Perrow, 1984, analyzes the concept of the normal accident); (3) post factum interpretation, in which ad hoc hypotheses are derived from a single observation, with no effort to explore new observations or alternative explanations (e.g., Weick, 1990, analyzes behavioral regression in the Tenerife air disaster); and (4) empirical generalization, in which an isolated proposition summarizes the relationship between two variables, but further interrelations are not attempted (e.g., Pfeffer and Salancik, 1977, analyze how power flows to those who reduce significant uncertainties).




The Development of Stakeholder Theory: An Idiosyncratic Approach

R. EDWARD FREEMAN

 

PG. #422 FREEMAN During this time, I began to work with Professor William Evan, a distinguished sociologist at Penn.  I was very flattered when Evan called me one day and asked to meet to discuss the stakeholder idea.  Evan saw this project as a way to democratize the large corporation.  Even though he was an impeccable empirical researcher, he immediately saw the normative implications of coming to see business as "serving stakeholders."  We began to meet weekly and talk about how to do the "next project" after Strategic Management: A Stakeholder Approach, even though that project wasn't yet finished.  We began an empirical study aimed at seeing how Chief Executive Officers made trade-offs among stakeholders, and we began to plan a book that would deal with the normative implications of reconceptualizing the corporate governance debate in stakeholder terms.  While we never finished the book, we did complete a number of essays, one of which has been reprinted countless times in business ethics textbooks.  What I learned from Bill Evan was invaluable: to be the philosopher that I was, rather than some positivist version of a social scientist.  Evan gave me the courage to tackle the normative dimension in an intellectual atmosphere--the modern twentieth-century Business School--that had disdain for such analysis.

In summary, I spent most of my time from 1978 until 1982 teaching executives and working with them to develop very practical ways of understanding how they could be more effective in their relationships with key stakeholders.  In the summer of 1982, I sat down at my home in Princeton Junction, New Jersey, and drafted the initial manuscript of Strategic Management: A Stakeholder Approach.  I tried to set forth a method or set of methods/techniques for executives to use to better understand how to manage key stakeholder relationships.  In addition, I wanted to track down the origins of the stakeholder idea, and give credit to its originators and the people whose work I had found so useful.

PG. #432 & 433 FREEMAN

    Open questions remain.  For instance:

  1. Is there a useful typology of enterprise strategy or answers to questions of purpose?
     

  2. How can we understand the relationship between fine-grained narratives of how firms create value for stakeholders, and the idea of stakeholder theory as a genre or set of loosely connected narratives?
     

  3. If we understand business, broadly, as "creating value for stakeholders," what are the appropriate background disciplines?  And, in particular, what are the connections between the traditional "social sciences" and "humanities"?
     

  4. How can the traditional disciplines of business such as marketing and finance develop conceptual schemes that do not separate "business" from "ethics" and can the stakeholder concept be useful in developing these schemes?
     

  5. If we understand "business," broadly, as "creating value for stakeholders," under what conditions is value creation stable over time?
     

  6. Can we take as the foundational question of political philosophy, "how is value creation and trade sustainable over time" rather than "how is the state justified"?

I am certain that there are many additional research questions, and many more people working on these questions than I have mentioned here.  I hope this paper has clarified some of my own writing in the stakeholder area, and provoked others to respond.

If I try to summarize the lessons for management theorists of the development of stakeholder theory, they would be four.  First, don't underestimate the role of serendipity and context.  My role would have been very different, indeed probably nonexistent, if a few key life events had unfolded differently.  Second, don't underestimate the contributions of others.  Really, my own contribution has been to try and synthesize the contributions of many others.  I am always amused and somewhat horrified when I'm at a conference and am introduced as the "father of stakeholder theory."  Many others did far more work, and more important work, than I did, and that continues today as stakeholder theory unfolds in a number of fields.  Third, pay attention to the real world of what managers, executives, and stakeholders are doing and saying.  Our role as intellectuals is to interpret what is going on, and to give better, more coherent accounts of management practice, so that ultimately we can improve how we create value for each other, and how we live.  That, I believe, is a kind of pragmatist's credo.  Finally, surely the author has a role in management theory.  Overemphasis on reviews, reviewers, revisions, and the socialization of the paper-writing process can lead to a kind of collective groupthink.  I believe that I could not have published the work in Strategic Management: A Stakeholder Approach as a set of A-journal articles.  By publishing a book, I managed to create a voice, building heavily on the voices of others, that could express a point of view.  I believe that in today's business school world, that is much more difficult, and that we need to return to a more ancient idea of the author in management theory.




Developing Resource Dependence Theory: How Theory is Affected by its Environment

JEFFREY PFEFFER

 

PG. #453 & 454 PFEFFER 21.5 THE POLITICS OF THEORY IN THE SOCIAL SCIENCES There are, I believe, many misconceptions about theory and theory development in the organization and social sciences, particularly on the part of younger scholars.  In concluding this discussion of the development and evolution of resource dependence theory, it is useful to both review these beliefs and see how they play out in understanding the growth and development of resource dependence.

The first, most strongly held, and possibly most harmful mistaken belief is that theories succeed or fail, prevail or fall into disuse, primarily, and some would maintain exclusively, on the basis of their ability to explain or predict the behavior that is the focus of the theory.  Moreover, there is a belief that a theory's success in prediction and explanation is particularly important in explaining its success if there are competitive theories covering the same dependent variables.  This belief is erroneous in at least two ways.

First of all, as argued elsewhere (Ferraro, Pfeffer, and Sutton, 2005), theories may create the environment they predict, thereby becoming true by construction rather than because they were originally veridical with the world they sought to explain.  To the extent people believe in a particular theory, they may create institutional arrangements based on the theory that thereby bring the theory into reality through these practices and institutional structures.  To the extent people hold a theory as true, they will act on the basis of the theory and expect others to act on that basis also, creating a normative environment in which it becomes difficult to not behave on the basis of the theory because to do so would violate some implicit or explicit expectations for behavior.  And to the extent that people adhere to a theory and therefore use language derived from and consistent with the theory, the theory can become true because language primes both what we see and how we apprehend the world around us, so that talking using the terminology of a particular theory also makes the theory become true.
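Pfeffer's three mechanisms--institutional arrangements, normative expectations, and theory-laden language--share one feedback structure: acting on a theory makes the theory more descriptively accurate.  A minimal way to see that logic is a Granovetter-style threshold model.  The Python sketch below is a hypothetical toy, not anything from Ferraro, Pfeffer, and Sutton (2005); every number in it is an illustrative assumption.

import random

random.seed(42)
N = 1000
# Thresholds stand in for "implicit or explicit expectations for
# behavior": an agent conforms to the theory's prescriptions once the
# share of others already conforming reaches the agent's threshold.
thresholds = [random.uniform(0.0, 0.9) for _ in range(N)]

def final_conformity(n_believers, rounds=200):
    # The first n_believers agents act on the theory out of sheer
    # belief; everyone else responds only to observed behavior.
    rate = n_believers / N
    for _ in range(rounds):
        followers = sum(t <= rate for t in thresholds[n_believers:])
        rate = (n_believers + followers) / N
    return rate

print(final_conformity(0))     # no believers: the theory stays "false" (0.0)
print(final_conformity(150))   # a committed minority: near-total conformity

With no believers, conformity never starts; seed the population with a committed minority and nearly everyone ends up behaving as the theory predicts--"true by construction" rather than veridical from the outset.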

Second, the philosophy of science notwithstanding, theories are quite capable of surviving disconfirming evidence.  Behavioral decision theory and its numerous empirical tests have shown that many of the most fundamental axioms of choice and decision that underlie economics are demonstrably false (e.g., Bazerman, forthcoming), but economics is scarcely withering away.  Nor are the specific portions of economic theory predicated on assumptions that have been shown to be false necessarily any less believed or used.  A similar situation is true in finance, where assumptions of capital market efficiency and the instantaneous diffusion of relevant information, so that a security's market price presumably incorporates all relevant information available at the time, have withstood numerous empirical and theoretical attacks.  To take a case closer to organization studies, the reliance on and belief in the efficacy of extrinsic incentives and monetary rewards persists not only in the lay community but in the scholarly literature as well.  So, Heath's (1999) insightful study of what he terms an extrinsic incentives bias is as relevant to the domain of scholars as it is to practicing managers and lay people.

What this means for resource dependence theory is that to the extent that claims that it is virtually dead (Carroll, 2002) are true and that it has been subsumed by transactions cost theory, this state of affairs may say less than one might expect about the comparative empirical success or theoretical coherence of transactions cost theory.  As David and Han (2004: 39) summarized in their review of sixty-three articles empirically examining transaction cost economics, "we...found considerable disagreement on how to operationalize some of TCE's central constructs and propositions, and relatively low levels of empirical support in other core areas."  Instead, the comment about the relative position of resource dependence and transactions cost theory may say more about the politics of social science and the fact that power is currently out of vogue, while efficiency and environmental determinism--such as that propounded by population ecology and other perspectives reifying an impersonal environment, with all of their conservative implications--are currently more in favor.




 

Transaction Cost Economics: The Process of Theory Development

OLIVER E. WILLIAMSON

 

PG. #485 & 486 WILLIAMSON Transaction cost economics is an interdisciplinary research project in which law, economics, and organization theory are joined (Williamson, 1985).  Although the operationalization of transaction cost economics began in the 1970s and has continued to develop in conceptual, theoretical, empirical, and public policy respects since, many of the key ideas out of which transaction cost economics (TCE) works have their origins in path-breaking contributions in law, economics, and organization theory from the 1930s.  It was not, however, obvious how these key ideas were related, much less how they could be fruitfully combined.  Two follow-on developments--the interdisciplinary program for doing social science research that took shape at the Graduate School of Industrial Administration (GSIA) at Carnegie-Mellon University during the late 1950s and early 1960s; and new developments in the market failure literature during the 1960s--were needed to set the stage.1

As for my own involvement, I seriously doubt that I would have perceived the research opportunity presented by TCE but for my training in the Ph.D. program at GSIA (from 1960 to 1963).2  More than such training, however, would be needed.  My teaching, research, and public policy experience during the decade of the 1960s all served to alert me to the research needs and opportunities posed by TCE.

This chapter is organized in seven parts.  Section 23.1 describes seminal contributions from the 1930s.  Follow-on developments in the 1960s are examined in 23.2.  My training, teaching, research, and involvement with public policy during the decade of the 1960s are sketched in 23.3.  The foregoing led into what, for me, was a transformative research project: my paper on "The Vertical Integration of Production: Market Failure Considerations" (1971), which is described in 23.4.  Some reflections on TCE as it has evolved since are set out in 23.5.  I discuss the "Carnegie Triple"--be disciplined; be interdisciplinary; have an active mind--in 23.6.  Concluding remarks follow.


1    The operationalization of TCE is the result of the concerted effort of many contributors.  A selection of some of the more influential articles can be found in Williamson and Masten, Transaction Cost Economics, Vols. I and II (1995).  Also see Claude Menard (2005).

2    For an autobiographical sketch of earlier events and people that were influential to my training and intellectual development, see Williamson (1995).  Although good instincts helped me to make the "right choices" at critical forks in the road, I also had the benefit of a number of exceptional advisors and teachers--and fortunately often had the good sense to listen.




Developing Evolutionary Theory for Economics and Management

SIDNEY G. WINTER

PG. #509 & 510 WINTER In the spring of 1959, chance events led me to read a 1950 paper by Armen Alchian, entitled "Uncertainty, Evolution and Economic Theory" (Alchian, 1950).  At the time, I was trying to do a dissertation featuring an empirical analysis of the determinants of corporate spending on research and development.  R&D had become quite a hot topic in applied economics after the mid-1950s.  The theoretical framework that I had planned to use in this investigation was a model based on the familiar concept of the profit-maximizing firm, a core theoretical commitment of mainstream economics then and now.  But, at the time of the fortuitous encounter with the Alchian paper, I had become concerned that my model of profit-maximizing R&D spending related to a decision situation that did not actually exist, at least not in any form resembling the context-free one that the model addressed.

Reading Alchian, I saw that an evolutionary approach on the theoretical front might offer a promising way to address satisfactorily a set of otherwise bothersome facts: (1) business discourse on R&D intensity seemed to be anchored on some notion of an appropriate R&D-to-sales ratio; (2) firm R&D decisions of any particular year were strongly shaped and constrained by decisions and their consequences from previous years; (3) incremental changes in policy nevertheless occurred, and had in fact accumulated over time into a pattern of significant and persistent inter-industry differences in R&D intensity; and (4) sustained pressures from the economic and technological environment seemed to play a shaping role in the emergence of those inter-industry differences.  Such was the starting point of my long odyssey with evolutionary thinking.

That personal journey is now halfway through its fifth decade.  More than three decades have passed since Richard Nelson and I published our first collaborative papers on evolutionary economics, and more than two since we presented a major statement of our theory in An Evolutionary Theory of Economic Change (Nelson and Winter, 1982a).  Needless to say, there have been a number of significant twists and turns along the way.  In particular, the opportunity to present this chapter in a volume devoted to management theory reflects developments that certainly were not anticipated in the early stages.  From its original status as a possible solution to my specific problem with R&D spending, the evolutionary approach quickly became the basis of an attempt at major reform in economic theory.  That it remained--though the scope became even broader--as the collaboration with Nelson began.  A contribution to management theory was not on the program.

Nevertheless, the logic of the connection to management is clear enough.  As my subsequent discussion here explains, one of the key advantages of the evolutionary approach is that it offers liberation from overly stylized theoretical accounts of business behavior.  Alternatively, one might say that the evolutionary approach embraces the realities of business decision making rather than shrinking defensively from them (exactly the choice posed in my encounter with the question of R&D spending).  It, thereby, makes room for managers in the economic account of business behavior, and at the same time offers a style of economic thinking that is more interesting and potentially helpful to managers.  In both directions of that traffic, the words "technology," "organization," and "change" are prominent, along with "management" and "evolution."  A considerable portion of this promise has been realized, thanks in great part to the number of other scholars who have shared this vision, or pieces of it, and sought to bring it to realization.  Major opportunities still lie before us.

PGS. 511 - 514 WINTER 24.2 "REALISM," MAXIMIZATION, AND THE THEORY OF THE FIRM The Friedman paper mentioned above soon supplanted the Alchian paper as the main focus of my early thinking about economic evolution, but Alchian's work remained a fundamental guide in one key respect.  Alchian had proposed a reconstruction of economic theory on evolutionary principles, and plausibly sketched some key elements of such a program.  That idea appealed to me, but it certainly was not what Friedman was up to.

Friedman's essay, "The Methodology of Positive Economics," appeared as the first chapter of his Essays in Positive Economics (Friedman, 1953).  In large part, it was Friedman's response to a lively scholarly controversy about the profit maximization assumption that had emerged in the 1940s.  The critics complained that the assumption was not realistic, and some of them cited evidence from close-in observation of business behavior to back their claims.2  Friedman argued that the critics suffered from a simplistic understanding of what "realism" meant in science.  He also put forward arguments about why profit maximization might be a "fruitful hypothesis" in spite of apparent conflicts with direct observation--scorning the latter with the comment "A fundamental hypothesis of science is that appearances are deceptive" (p. 33).  One of his supportive arguments for profit maximization as a scientific hypothesis was an evolutionary "natural selection" argument that concluded with these words:

The process of "natural selection" thus helps to validate the hypothesis--or rather, given natural selection, acceptance of the hypothesis can be based largely on the judgment that it summarizes appropriately the conditions for survival.  (1953:22)

The critical assessment of this proposition--which I have come to call "the Friedman conjecture"--became the central theme of my dissertation research, at a rather late stage in the year that I was supposedly devoting to the dissertation.  The study of corporate R&D spending was never completed; the theoretical puzzle it presented was recast as an example of a much larger puzzle about the general representation of business behavior in economic theory, and about profit maximization in particular.  The topics of R&D and technological change were set aside, but the early concern with these issues was a portent of things to come in the development of evolutionary economics.

As Friedman's essay explained quite well, every science faces the challenge of finding ways to make its theoretical concepts operational, thus building a bridge from a theory to a set of facts that might be expected to throw light on the merit of the theory.  Just how this "light-throwing" works is not obvious.  It is actually a deep and sometimes contentious issue, though elementary accounts of the scientific method often posit a simple and reassuring answer.  One particular puzzle concerns the appropriateness of leaving a theoretical term without any direct empirical reference of its own, so that it serves only as a convenient place-holder in a longer argument that engages observable reality at some distant point.  Friedman's position was that the notion of "profit maximization" in economic theory was a theoretical term of this kind: what the theory says, per Friedman, is that firms behave as if they maximize profits.  Hence, mounting an effort to examine firm decision making at close range is simply misguided (as economic science), because economic theory makes no real prediction as to what you should expect to find.  Friedman suggested that other processes--such as "natural selection" or tacit skill--might create the observable consequences of profit maximization.3  This could be happening even if the maximization itself--in the sense of clear objectives, explicit calculation and careful comparison of alternatives--were not only unobservable, but absent.  He also expressed skepticism about the possibility of discovering how business decisions are made through observation or interviews, suggesting that respondents might dissemble in some way or perhaps were actually not consciously aware of the mental processes involved (the tacit skill point).  For example,

the billiard player, if asked how he decides where to hit the ball, may say that he "just figures it out" but then also rubs a rabbit's foot just to make sure; and the businessman may well say that he prices at average cost, with of course some minor deviations when the market makes it necessary.  The one statement is about as helpful as the other, and neither is a relevant test of the associated (maximization) hypothesis.  (Friedman, 1953: 22)

This skepticism about the value of direct observation of firms is by no means peculiar to Friedman, or to those who are explicitly committed to something like his methodological outlook.  It remains a broadly held attitude in the economics discipline, though perhaps not so broadly as when Friedman wrote.  Anyone who undertakes a direct approach to studying firm behavior is sure to encounter it, sooner rather than later, when discussing the project with economists.4  To be clear, there certainly is merit in warning against the possibility that respondents are dissembling, or reporting socially approved motivations and procedures, or exercising tacit skills that they cannot explicate effectively.  These points are familiar and accepted in social science research, and for that matter are widely relevant in everyday life.  What is distinctive about the response often encountered from economists is its extreme and unqualified nature.  Instead of being the beginning of a discussion of how likely it actually is, given the actual context, that the results are tainted in these ways, it tends to be offered as the end of the discussion--both for the present and for the foreseeable future.

The methodological issues surrounding profit maximization have rough parallels in other sciences.  The case of the neutrino is a classic of the type.  When originally proposed, the new particle appeared to be nothing more than an ex post adjustment to prevailing physical theory to protect it from apparently disconfirming observations.  Even the proposer, Wolfgang Pauli, referred to the proposal as a "desperate expedient."  As a patch to the theory, the neutrino seemed to have the disturbing property that it was apparently impossible to check its validity, since the assumed properties of zero mass and zero charge posed a major obstacle to observation.  Thus, paralleling the case of "as if" profit maximization, the proposed patch was put forward in a context of cogent reasoning as to why it was impossible to check on its validity.  Physicists and philosophers debated the legitimacy of the neutrino patch for some decades--after which the question faded, as first indirect and then relatively direct confirming evidence was developed.


1    An argument that Friedman's evolutionary insights should imply reconstruction was actually made by Tjalling Koopmans, a much-admired mathematical economist who was a professor of mine at Yale (Koopmans, 1957: 140-141).  I do not recall reading that passage in Koopmans before I read Alchian--but I might not have reacted, even if I did.

2    A good example is Gordon (1948), which cites a lot of the other relevant work.

3    Friedman did not use the terminology of "tacit skill," but it seems fully appropriate in retrospect.

4    For a recent example, see Truman Bewley's discussion of these attitudes, which he encountered in connection with his interview-based study of why firms don't cut wages in recession (Bewley, 1999: esp. 8-16).  More generally, see also Schwartz (1998).


PG. #518 WINTER 24.3.2 THE FRIEDMAN CONJECTURE Theoretical analysis of the Friedman conjecture is one such approach.  Essentially, the question is whether money will be left on the table in the long run if it is being pursued by profit-seeking firms with plausible, though typically not optimal, policies.  In its basic form, such analysis first posits a situation in which it is logically possible for business firms to get the right answers to their decision problems, for there is at least a right answer.  (Without this very substantial assumption, the Friedman conjecture is dead on arrival as a matter of strict logic.)  The second constituent of the analysis is some postulated set of possible behavior patterns for firms, such that at least some of these patterns are not comprehensively optimal.  That is, contrary to the standard assumptions of economics, not all firms are necessarily getting the right answer all the time.  (Without this assumption, the conclusion "firms maximize profits" is the trivial result of the familiar postulate, requiring no evolutionary logic or process to establish it.)  The final constituent is a characterization of the dynamic process by which firms interact competitively, determining their survival and growth.  With the details of a hypothetical context thus specified, the problem of such analysis is to characterize how the dynamic process turns out, and whether this outcome is consistent with Friedman's conjecture of "as if" profit maximization.
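The structure just described translates directly into a small simulation.  The Python sketch below is a hypothetical illustration of Winter's three constituents, not a model taken from Winter's or Friedman's work; the demand curve, the cost figure, the distribution of rules, and the growth rule are all invented for the example.

import random

random.seed(1)
A, B, UNIT_COST, GROWTH = 100.0, 1.0, 10.0, 0.005

# Constituent 2: fixed behavioral "routines" (output per unit of
# capital), none of them chosen by explicit optimization.
firms = [{"capital": 0.5, "routine": random.uniform(0.1, 1.0)}
         for _ in range(50)]

price = 0.0
for period in range(1000):
    # Constituent 1: a well-defined market, so a "right answer" exists
    # (here, the zero-profit competitive outcome with price = UNIT_COST).
    total_output = sum(f["capital"] * f["routine"] for f in firms)
    price = max(A - B * total_output, 0.0)
    # Constituent 3: profits drive differential growth and survival.
    for f in firms:
        profit = (price - UNIT_COST) * f["capital"] * f["routine"]
        f["capital"] = max(f["capital"] + GROWTH * profit, 0.0)
    firms = [f for f in firms if f["capital"] > 1e-6]

print(round(price, 2))   # drifts toward UNIT_COST, the "as if" outcome

No firm in this toy world calculates an optimum, yet selection pushes the market price toward the competitive zero-profit level; whether, and under what conditions, that kind of result survives in richer settings is exactly the question such analysis poses.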

PG. #520 & 521 WINTER 24.3.3 BEHAVIORALISM At the time I was beginning my dissertation research, the "Carnegie School" was reaching an advanced stage of development in Pittsburgh.  Herbert Simon's famous article on satisficing, "A Behavioral Model of Rational Choice," had appeared in 1955 (Simon, 1955), and I had had the good fortune to encounter it in graduate school.11  The classic Organizations volume by Simon and James March appeared in 1958 (March and Simon, 1958).  Much of the research that in 1963 appeared as the Richard Cyert and James March book, A Behavioral Theory of the Firm (Cyert and March, 1963), was under way and was beginning to appear in working paper form.  What the Carnegie scholars had to say about firm behavior was partly familiar, being in some ways parallel to what had been said earlier by the economists who criticized the orthodoxy in the theory of the firm.  These were the very critics to whom Friedman responded in his essay, and I was well aware of their work.  In retrospect, it may appear that even at that early stage there was an evident opportunity to use an evolutionary approach to build on and complement the "micro-foundations" of firm behavior contributed by the Carnegie School.

In fact, that did not happen--at the time.  There was some cross-fertilization, and some sense of encouragement (at least in the Carnegie-to-Winter direction), but not much.  The "behavioral theory of the firm" was not easy to absorb, especially in its unfinished form.  It involved novel theory, novel research techniques (especially computer simulation) and novel-seeming blind spots (especially, an apparent indifference to the role of markets as understood by economists).

When the Cyert and March book appeared in 1963, I was invited to review it for the American Economic Review (Winter, 1964b).  In the course of reading the book and preparing the review, I was able to see the Carnegie work as a program for the first time--and to see it as complementary to the evolutionary approach, as suggested above.  My review noted that the authors seemed content to regard firm behavior as a significant scientific problem in its own right, and willing therefore to set aside the task of predicting market phenomena--and suggested that this should not be the permanent state of affairs:

Also, it is to be hoped that someone will eventually accept the challenge of attempting to provide a better definition of the relationship between the behavioral theory and the traditional theory than is provided by the assertion that the two theories are concerned with different problems...

...the consistency of the behavioral theory with the more persuasive portion of the empirical evidence for the traditional theory has yet to be determined.  Investigation of the relationship between the two theories will probably involve closer attention to the circumstances that determine when the profit goal is evoked and when profit aspirations adjust upward, as well as to the ways in which competition may force an approach to profit maximization by firms whose decision processes are governed in the short run by crude rule-of-thumb decision rules.  (Winter, 1964b: 147; emphasis in original)

Although it was not fully spelled out in my review, any more than in the book itself, I could see that the Cyert and March book suggested the possibility of a new division of scientific labor.  Firm behavior could be regarded as a subject matter in its own right, which on the face of it appeared to involve aspects appropriately studied in psychology, sociology, organizational behavior, engineering, operations research, management, finance, accounting, marketing, and perhaps other disciplines as well, in addition to economics.  The primary role of economics was not to strive for imperial control over these other intellectual domains, and certainly not to ignore them, but to point out the systemic and long-run implications of whatever firm-level truths might be brought forward, from whatever source.  This role is especially suitable for economists insofar as those implications are largely the result of firms interacting through markets.  At the same time, operations research  and the business-oriented disciplines might reasonably concern themselves (at least in part) with how existing modes of business behavior might realistically be improved--and that, too, is not the central role of economics.  This vision of the appropriate division of labor represents my present view.


11    I doubt that Simon's article ever made an appearance on many reading lists for economics courses, and certainly not by 1957.  But it was on the list for Jacob Marschak's seminar on Economics of Information and Organization, which I took at Yale in that year.  Even the title of Marschak's seminar now seems quite remarkable, given the date.


PG. #541 & 542 WINTER 24.7 EVOLUTIONARY ECONOMICS AND MANAGEMENT Economics needs to take large firms very seriously because of their major influence on the system as a whole.  Taking large firms seriously means taking managers seriously, because managers make real choices under real uncertainty.  In organizational economics, there are valiant efforts to take managers seriously within the familiar frame of rational choice modeling (Gibbons, 2003).  Such efforts, while capable of generating useful insight at the micro level, have limited power to address the evolution of the context or to capture the larger-scale interactions in the system.  For that purpose, the familiar story of profit-maximizing firms and (even) competitive markets provides the backdrop for the analysis, as it does elsewhere in the discipline, for want of anything better (or so it is claimed).

In management, the need to take managers seriously does not require an argument, and is not limited to accounting for the influence of large firms.  A possibly more serious question is, does management need to take economics seriously?  While a lot of useful work under the broad rubric of management probably does not need to take economics seriously, there are areas where economic principles are fundamental to the problems addressed.  Strategic management is the obvious case.  Like mainstream economics, evolutionary theory illuminates the workings of competition in the marketplace, through which firms influence each other's profitability as well as their prospects for growth and survival.  Unlike mainstream economics, its illumination of those "workings" falls directly on the dynamic processes of competition, and not just on equilibrium outcomes or tendencies.  Also unlike mainstream economics, its image of a population of firms is an image of heterogeneous firms, differing in their ways of doing things and also in size--with the size differences produced endogenously as a consequence of those idiosyncrasies.

Indeed, thanks to the complementary theoretical work in organizational learning and the partial filling of the major gap concerning industry evolution, it should now be within reach to produce a comprehensive model of the creation and evolution of an industry--a sort of "Big Bang" model for an industrial universe.  Such a model would map the entry processes, the learning processes, the market competition processes, the differential growth and survival, and the appearance of concentrated structure--all within a frame that represented and controlled the key exogenous forces and structural determinants, but none of the details.  It could even extend to the significant problems relating to the determination of industry and firm boundaries, since evolutionary forces are at work there as well (Langlois, 1991; Jacobides and Winter, 2005).  Such a model would rest on a layered structure of theoretical commitments about key processes--commitments that have already been identified and debated, and of course can be debated further.  Implemented as a simulation model, it would produce a realistic picture of an industry that responded in systematic ways to differences in the exogenous conditions.  It might misrepresent reality, not merely because of the necessarily abstract character of theory, but because it failed to capture significant patterns in the reality.  And if it did misrepresent reality in significant ways, that discrepancy would be ascertainable.  In short, it would have content.
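As a hint of what "implemented as a simulation model" might mean here, the following deliberately skeletal Python sketch strings the named processes together.  It is an assumption-laden toy, not the model Winter envisions: the entry rate, the learning rule, the demand curve, and the growth rule are invented placeholders that a serious implementation would replace with theoretically grounded processes.

import random

random.seed(7)
A, B, GROWTH = 50.0, 0.5, 0.01
firms = []

def herfindahl(sizes):
    # Concentration measure: sum of squared market shares.
    total = sum(sizes)
    return sum((s / total) ** 2 for s in sizes) if total else 0.0

for period in range(400):
    # Entry process: a few new firms appear each period with small
    # capacity and a randomly drawn initial unit cost.
    for _ in range(random.randint(0, 2)):
        firms.append({"capacity": 0.2, "cost": random.uniform(8.0, 16.0)})
    # Market competition process: price clears against total capacity.
    total_output = sum(f["capacity"] for f in firms)
    price = max(A - B * total_output, 0.0)
    for f in firms:
        # Differential growth and survival: profit is reinvested.
        profit = (price - f["cost"]) * f["capacity"]
        f["capacity"] = max(f["capacity"] + GROWTH * profit, 0.0)
        # Learning process: unit cost falls with continued operation.
        f["cost"] *= 0.999
    firms = [f for f in firms if f["capacity"] > 1e-3]

# Emergent structure: survivor count and concentration at the end.
print(len(firms), round(herfindahl([f["capacity"] for f in firms]), 3))

Early entrants that happen to draw low costs grow, learn, and come to dominate; late, high-cost entrants shrink and exit; and a concentrated structure appears without being assumed anywhere in the code.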

Most fundamentally from the viewpoint of management theory, evolutionary theory invites detailed attention to individual firms and the problems they face in dealing with competitive environments.24  It does not merely accept, but urges, that inquiry extend to the inner workings of firms.  It offers the investigator suggestions about what to look for--especially if the inquiry is one that includes a concern with how that firm fits and fares in the larger system.  It also urges, however, that an open mind about the nature of decision processes found in firms will prove more useful than a closed one.


24    In this connection, see Gavetti and Levinthal (forthcoming) for an encouraging assessment.




An Evolutionary Approach to Institutions and Social Construction: Process and Structure

LYNN G. ZUCKER and MICHAEL R. DARBY

 

PG. #566 & 568 ZUCKER & DARBY 25.6 SOME FURTHER THOUGHTS ON INSTITUTIONAL THEORY A theory is in many ways like a living tree.  It grows according to where the nutrients and sun are the best, and in the process sometimes grows odd-looking branches and may be quite unbalanced in its growth in the sense that one side of the tree grows more than the other.  Many people work at developing a theory, and not all use the same approach.  One's own ideas change over time, as well.

A rewarding part of theory construction is the flash of insight one gets from putting various pieces of a mosaic or puzzle together, when you see relationships among concepts and measures that were not open to you before.  Figure 25.6 outlines one conception of how theory is developed, a conception that feels right to us as we have tried to make tacit ideas about institutional process and structure more explicit.  At the bottom of the figure, we briefly outline the basics for the process of explicating the theory so that it can be more easily applied and further developed by others.

Like a giant sequoia (or an organization), a theory can live a long time, and every once in a while it is a good idea to prune the branches and perhaps clear out some of the surrounding growth that obstructs light or takes nutrients.  Constructing theory is much like social construction: it is inherently a social process, and also often has significant tacit components.  The most difficult work a theorist does is to codify some of the tacit components, but this work can also be very rewarding, since implications of the theory, like the flash of insight mentioned above, suddenly become visible and codified in a way that makes them more accessible to you, as well as to others (Cohen, 1988; Berger et al., 1962).  It is at this point in the process--the formalization and codification of the theory, not the early, more tacit development--where the normative accounts of theory construction best hold.

Theories, or more commonly empirical generalizations or theoretical approaches, sometimes take hold in a way that makes it difficult to take negative evidence into account.  Examples abound, with perhaps the most famous being the Pygmalion Effect (Rosenthal and Jacobson, 1968).6  Thus, a more systematic approach to determining the confirmation status of a theory is essential.  Institutional theory is past its adolescence (Scott, 1987), and ready for more systematic formalization and test.


6    This is contested terrain with strong critiques of Rosenthal and Jacobson's research being followed by research that disconfirmed their findings on expectation effects, followed by a flood of research that provided confirming evidence some of the time and disconfirming evidence relatively less often.  For a summary of the controversy and findings, see Miller and Turnbull (1986).




Epilogue: Learning to Develop Theory from the Masters

KEN G. SMITH and MICHAEL A. HITT

 

PG. #573 & 574 SMITH & HITT 26.1.1 TENSION/PHENOMENA The starting point for many of our scholars was a conflict or dissonance between the scholars' firmly embedded viewpoint about management, organizations, and the nature of the world, and an observation of phenomena that contradicted this viewpoint.  These phenomena included contradictory research findings, faulty assumptions in an existing line of research or business behavior, or events that required additional or even a different explanation.  Generally, these conflicts created tension for the scholars, which motivated them to resolve the tension.  Hambrick notes, "My sense is that those who have a knack for developing theories are astute observers of phenomena; they detect puzzles in those phenomena; and they then start thinking about ways to solve the puzzles...puzzles trigger theory development."

PG. #575 & 576 SMITH & HITT 26.1.2 SEARCH Levitt and March (1988) suggest that search is motivated by the need to solve problems.  Tension and dissonance led our masters to search for potential answers in order to reduce or eliminate the tension they experienced.  The answers in this case involve the initial framework of their original theory.  We label this phase "search" because there had to be exploration and discovery to develop the framework of the proposed theory.  That is, the new theory is a consequence of the tension and the search for answers.

Interestingly, our scholars were not highly explicit about the search processes or search behaviors they used, other than to recognize that the search process occurred.  Bandura notes, "Discontent with adequacy of existing theoretical explanations provides the impetus to search for conceptual schemes that can offer better explanations."  Vroom describes how he was "searching for a dissertation topic" when he obtained an insight for expectancy theory.  Rousseau describes the search process: "Observe and listen to people in the workplace, do lots of reading, and talk with other colleagues to figure out the way forward."  Mintzberg argues, "We get interesting theory when we let go of all this scientific correctness, or to use a famous phrase, suspend our disbeliefs, and allow our minds to roam freely and creatively."  We suspect that the search process is not independent of the tension that created it.  The two likely occur almost simultaneously, and the tension continues until the framework for the new theory is developed.  In fact, some tension is likely to exist until others in the field embrace the new theory.  That said, we infer different patterns of search based on the career paths of our scholars and the colleagues with whom they interacted.  Thus, their career orientations and their collegial relationships interacted with their individual training and experiences (knowledge stocks) to produce the new theory.

PG. #578 & 579 SMITH & HITT 26.1.3 ELABORATION/RESEARCH The process by which scholars research and expand their ideas characterizes the elaboration stage of theory development.  The process of elaboration is broadly described by our authors as detective work, induction, sensemaking, and research.  Weick describes this stage of theory development as a:

sprawling collection of ongoing interpretive actions.  To define this "sprawl" is to walk a thin line between trying to put plausible boundaries around a diverse set of actions that seem to cohere while also trying to include enough properties so that the coherence is seen as distinctive and significant but something less than the totality of the human condition.

Bandura also captures this part of the process: "Initial formulations prompt lines of experimentation that help improve the theory.  Successive theoretical refinements bring one closer to understanding the phenomena of interest."  Oldham and Hackman note:

We suspect that no theory, and certainly not ours, emerges all at once in a flash of insight.  Instead, theory development can seem as if it is an endless iterative process, moving back and forth between choice of variables and specification of the links among them, hoping that eventually the small, grudgingly achieved advances will outnumber the forced retreats.

Locke and Latham are more specific in discussions of their means of elaboration in their goal setting research:

by doing many experiments over a long period of time, by showing that our experiments worked and thereby getting other researchers interested in goal setting research, by coming at the subject of goal setting from many different angles, by examining failures and trying to identify their causes, by resolving contradictions and paradoxes, by integrating valid ideas from other developing theories, by responding to criticisms that seemed to have merit and refuting those that did not, by asking ourselves critical questions and by keeping an open mind.

Zucker and Darby use the metaphor of a growing tree to portray the process of theory development: "It grows according to where the nutrients and sun are the best, and in the process sometimes grows odd-looking branches and may be quite unbalanced in its growth in the sense that one side of the tree grows more than the other.  Many people work at developing a theory, and not all use the same approach."  Rousseau suggests that three specific mechanisms (or distinct sets of actions) helped her elaborate psychological contract theory: spending time in organizations, writing two books, and producing a series of research projects.  In some cases, this process of elaboration is of a shorter-term nature and in others it is a career-long endeavor.  For the most part, elaboration involves a rather long period of time, although not necessarily a whole career.

PG. #581 & 582 SMITH & HITT 26.1.4 PROCLAMATION/PRESENTATION
The final phase of theory development is presenting the model and research to the various and appropriate constituencies.  Although the presentation of one's ideas or theory might seem relatively straightforward, our scholars generally struggled to get their new ideas accepted, especially in the top academic journals.  Perhaps because their ideas were new or the theory too encompassing, several of our scholars had to write a book to present their works.

The proclamation of the theories can occur in many ways, but two alternatives are more common.  First, there can be a series of both conceptual and empirical articles that often incrementally build on each other or independently add to the theoretical knowledge.  Usually after the quantity of this work passes a critical threshold, it is summarized in a book in order to create a "gestalt" framework and to enhance the coherence of the theory.  For example, Locke and Latham summarized over twenty-five years of studies in their 1990 book on goal setting.  Similarly, Finkelstein and Hambrick (1996) summarized and elaborated ten years of research in their book, Strategic Leadership.  Beach and Mitchell (1996) published a number of papers on image theory and summarized this work in an edited volume on Image Theory.

 




Where did the giant Harvard economist John Kenneth Galbraith fail?

And presumably some people did remember that McLandress was himself a figment of the imagination.
In the case of John Kenneth Galbraith, who died last week, the Times obituary could scarcely fail to register the man's prominence. He was an economist, diplomat, Harvard professor, and advisor to JFK. Royalties on his book The Affluent Society (1958) guaranteed that — as a joke of the day had it — he was a full member. But the notice also made a point of emphasizing that his reputation was in decline. Venturing with uncertain steps into a characterization of his economic thought, the obituary treated Galbraith as a kind of fossil from some distant era, back when Keynesian liberals still roamed the earth.
Scott McLemee, "Wheat and Chaff," Inside Higher Ed, May 4, 2006 --- http://www.insidehighered.com/views/2006/05/03/mclemee

He was patrician in manner, but an acid-tongued critic of what he once called “the sophisticated and derivative world of the Eastern seaboard.” He was convinced that for a society to be not merely affluent but livable (an important distinction now all but lost) it had to put more political and economic power in the hands of people who exercised very little of it. It was always fascinating to watch him debate William F. Buckley — encounters too suave to call blood sport, but certainly among the memorable moments on public television during the pre-“Yanni at the Acropolis” era. He called Buckley the ideal debating partner: “pleasant, quick in response, invulnerable to insult, and invariably wrong.”

Galbraith’s influence was once strong enough to inspire Congressional hearings to discuss the implications of his book The New Industrial State (1967). Clearly that stature has waned. But Paul Samuelson was on to something when he wrote, “Ken Galbraith, like Thorstein Veblen, will be remembered and read when most of us Nobel Laureates will be buried in footnotes down in dusty library stacks.”

The reference to the author of The Theory of the Leisure Class is very apropos, for a number of reasons. Veblen’s economic thought left a deep mark on Galbraith. That topic has been explored at length by experts, and I dare not bluff it here. But the affinity between them went deeper than the conceptual. Both men grew up in rural areas among ethnic groups that never felt the slightest inferiority vis-a-vis the local establishment. Veblen was a second-generation Norwegian immigrant in Wisconsin. Galbraith, whose family settled in a small town in Canada, absorbed the Scotch principle that it was misplaced politeness not to let a fool know what you thought of him. “Better that he be aware of his reputation,” as Galbraith later wrote, “for this would encourage reticence, which goes well with stupidity.”

Like Veblen, he had a knack for translating satirical intuitions into social-scientific form. But Galbraith also worked the other way around. He could parody the research done by “the best and the brightest,” writing sardonically about what was really at stake in their work.

I’m thinking, in particular, of The McLandress Dimension (1963), a volume that has not received its due. The Times calls it a novel, which only proves that neither of the two obituary writers had read the book. And it gets just two mentions, in passing, in Richard Parker’s otherwise exhaustive biography John Kenneth Galbraith: His Life, His Politics, His Economics (Farrar, Straus, and Giroux, 2005).

. . .

Writing from behind his persona, Galbraith turned in a credible impression of social-science punditry at its most pompous. (You can read the entire review here.) It must have been very funny if you knew what was going on. And presumably some people did remember that McLandress was himself a figment of the imagination.

But not everyone did. Over time, Report from Iron Mountain became required reading for conspiracy theorists — who, by the 1990s, were quite sure it was a blueprint for the New World Order. After all, hadn’t a reviewer vouched for its authenticity in The Washington Post?

And what did Galbraith think of all this? One has to wonder.

 *****************************

It is fortunate for Professor Galbraith that he was born with singular gifts as a writer. It is a pity he hasn't used these skills in other ways than to try year after year to bail out his sinking ships.
William F. Buckley, "John Kenneth Galbraith, R.I.P.," Townhall, May 2, 2006 --- http://www.townhall.com/opinion/columns/wfbuckley/2006/05/02/196026.html

The public Galbraith I knew and contended with for many years is captured in the opening paragraphs of my review of his last book, "The Culture of Contentment." I wrote then: "It is fortunate for Professor Galbraith that he was born with singular gifts as a writer. It is a pity he hasn't used these skills in other ways than to try year after year to bail out his sinking ships. Granted, one can take satisfaction from his anti-historical exertions, and wholesome pleasure from his yeomanry as a sump-pumper. Indeed, his rhythm and grace recall the skills we remember having been developed by Ben-Hur, the model galley slave, whose only request of the quartermaster was that he be allowed every month to move to the other side of the boat, to ensure a parallel development in the musculature of his arms and legs.

"I for one hope that the next time a nation experimenting with socialism or communism fails, which will happen the next time a nation experiments with socialism or communism, Ken Galbraith will feel the need to explain what happened. It's great fun to read. It helps, of course, to suppress wistful thought about those who endured, or died trying, the passage toward collective living to which Professor Galbraith has beckoned us for over 40 years."

So it is said, for the record; and yet we grieve, those of us who knew him. We looked to his writings for the work of a penetrating mind who turned his talent to the service of his ideals. This involved waging war against men and women who had, under capitalism, made strides in the practice of industry and in promoting the common good. Galbraith denied them the tribute to which they were entitled.

When they went further and offered their intellectual insights, Galbraith was unforgiving. His appraisal of intellectual dissenters from his ideas of the common good derived from the psaltery of his moral catechism, cataloguing the persistence of poverty, the awful taste of the successful classes, and the wastefulness of the corporate and military establishments.

Where Mr. Galbraith is not easily excusable is in his search for disingenuousness in such as Charles Murray, a meticulous scholar of liberal background, whose "Losing Ground" is among the social landmarks of the postwar era. "In the mid 1980s," Galbraith writes, "the requisite doctrine needed by the culture of contentment to justify their policies became available. Dr. Charles A. Murray provided the nearly perfect prescription. ... Its essence was that the poor are impoverished and are kept in poverty by the public measures, particularly the welfare payments, that are meant to rescue them from their plight." Whatever qualifications Murray made, "the basic purpose of his argument would be served. The poor would be off the conscience of the comfortable, and, a point of greater importance, off the federal budget and tax system."

One needs to brush this aside and dwell on the private life of John Kenneth Galbraith. I know something of that life, and of the lengths to which he went in utter privacy to help those in need. He was a truly generous friend. The mighty engine of his intelligence could be marshaled to serve the needs of individual students, students manqué, people who had a problem.

Two or three weeks ago he sent me a copy of a poll taken among academic economists. He was voted the third most influential economist of the 20th century, after Keynes and Schumpeter. I think that ranking tells us more about the economics profession than we have any grounds to celebrate, but that isn't the point I made in acknowledging his letter. I had just received a book about the new prime minister of Canada, Stephen Harper, in which National Review and its founder are cited as the primary influences in his own development as a conservative leader. But I did not mention this to Galbraith either. He was ailing, and this old adversary kept from him loose combative data that would have vexed him.

I was one of the speakers at his huge 85th birthday party. My talk was interrupted halfway through by the master of ceremonies. "Is there a doctor in the house?" The next day I sent Galbraith the text of my talk. He wrote back: "Dear Bill: That was a very pleasant talk you gave about me. If I had known it would be so, I would not have instructed my friend to pretend, in the middle of your speech, to need the attention of a doctor."

Forget the whole thing, the getting and spending, and the Nobel Prize nominations, and the economists' tributes. What cannot be forgotten by those exposed to them are the amiable, generous, witty interventions of this man, with his singular wife and three remarkable sons, and that is why there are among his friends those who weep that he is now gone.

I was fortunate to attend one of the famous liberal vs. conservative debates years ago on the Trinity University campus between William F. Buckley and John Kenneth Galbraith. It was a classic in terms of razor-sharp wit directed at each other, with an underlying deep respect for the person if not the position.

**********************

"The New Industrial Economist," by David R. Henderson, The Wall Street Journal, May 2, 2006, Page A16 --- http://online.wsj.com/article/SB114652913131041003.html?mod=opinion&ojcontent=otep

John Kenneth Galbraith, one of America's most famous economists, died on Saturday at the age of 97. His fame came not from his technical accomplishments in academic economics but from his awesome writing ability, evidenced in 33 books and many more articles. He wrote almost all of his books -- certainly the ones that increased his fame -- for a general audience. He honed his writing ability while on the board of editors of Fortune magazine from 1943 to 1948. After that, he never stopped.

. . .

He once remarked, at his wittiest and most on-target, that "In the choice between changing one's mind and proving there's no need to do so, most people get busy on the proof." Nevertheless, while mainstream economists were sometimes a little nasty in debating Galbraith, they did point out fundamental problems with his conclusions -- problems that he never seriously grappled with. Galbraith focused too much on the witty epigram. As one critic pointed out, his main form of argument for key assumptions in his model of the economy was "vigorous assertion."

Galbraith's three most important books, measured by sales and influence on popular thinking, were "American Capitalism: The Concept of Countervailing Power" (1952), "The Affluent Society" (1958) and "The New Industrial State" (1967). In "American Capitalism," Galbraith argued that giant firms had replaced small ones to the point where the "perfectly competitive" model no longer applied to much of the American economy. But not to worry, he argued. The power of large firms was offset by the countervailing power of large unions, so that consumers were protected by competing centers of power.

The late Nobel laureate George Stigler gave a pointed response in 1954. Stigler noted that before Roosevelt's cartel-forming National Recovery Administration started giving monopoly power to large businesses, in five of the six industries with the most powerful unions -- building trades, coal mining, printing, clothing and musicians -- there were many small firms rather than, as Galbraith's theory would have predicted, a few large ones. Moreover, noted Stigler, even if powerful labor unions offset the power of large firms, there was no assurance that this would help consumers -- now not only the firms but also the unions would have a desire to limit output and keep prices high and would simply be fighting over the monopoly rents.

In "The Affluent Society," Galbraith contrasted the affluence of the private sector with the "squalor" of the public sector, writing, "our houses are generally clean and our streets generally filthy." He attributed this to our failure to give the government enough of our resources to do its job. He appears never to have considered the more straightforward economic explanation for dirty streets -- one that is based on incentives. The model that applies to the streets is "the tragedy of the commons": No one owns the streets and, therefore, no one has an incentive to take care of them.

Many people liked "The Affluent Society" because of their view that Galbraith, like Thorstein Veblen before him, attacked production that was geared to "conspicuous consumption." But that is not in fact what Galbraith did. He argued, rather, that "an admirable case can still be made" for satisfying even consumer wants that "have bizarre, frivolous or even immoral origins." His argument against satisfying all consumer demands was more subtle than Veblen's. Galbraith wrote: "If the individual's wants are to be urgent, they must be original with himself. They cannot be urgent if they must be contrived for him. And above all, they must not be contrived by the process of production by which they are satisfied. . . . One cannot defend production as satisfying wants if that production creates the wants."

Really? The late Friedrich Hayek, co-winner of the 1974 Nobel Prize in economics, delivered the most fundamental critique of Galbraith's thesis. Hayek conceded that most wants do not originate with the individual; our innate wants, he wrote, "are probably confined to food, shelter and sex." All other wants we learn from what we see around us. Probably all our aesthetic feelings -- our enjoyment of music and literature, for example -- are learned. So, wrote Hayek, "to say that a desire is not important because it is not innate is to say that the whole cultural achievement of man is not important." Hayek could have taken the point further. Few of us, for example, have an innate desire for penicillin. It had to be first produced and then advertised before doctors could know about it. And it's safe to say that we've found it very valuable.

Galbraith's magnum opus was "The New Industrial State," in which he argued that large firms dominate the American economy. "The mature corporation," he wrote, "had readily at hand the means for controlling the prices at which it sells as well as those at which it buys. . . . Since General Motors produces some half of all the automobiles, its designs do not reflect the current mode, but are the current mode. The proper shape of an automobile, for most people, will be what the automobile makers decree the current shape to be." Well, no. Of course, GM failed to "decree" the shape of automobiles in the 1980s and continues to fail today, leading to huge losses of both money and market share. It seems consumers, whom Galbraith regarded as manipulable by Detroit and Madison Avenue, somehow didn't accept GM's "decree."

To his credit, Galbraith admitted some of this. In July 1982, the steel and auto companies he had claimed were immune from competition and recessions were laying off workers in response both to foreign competition and recession. Asked on "Meet the Press" whether he had underestimated the extent of risk that even large corporations face, he paused and replied, "Yeah, I think I did."

Continued in article




Leaders and the Responsibility of Power

"The Anxiety of Influence," by Josiah Bunting III, The Wall Street Journal, July 3, 2006; Page A10 --- http://online.wsj.com/article/SB115188105388196624.html?mod=opinion&ojcontent=otep

More than 40 years ago the historian Henry Steele Commager asked how it was that the British colonies in North America could have produced such a galaxy of leaders: a generation that made a revolution and established a new and enduring nation. In talent, he argued, the leadership rivaled that of the Athens of Pericles and the England of Elizabeth I, a florescence of wisdom, character, virtue and vision that has not since been equaled. The question has never been -- and never will be -- satisfactorily answered; each generation is obliged to engage it in its own way.

Commager adduced several reasons, most of them familiar: "New occasions teach new duties," as James Russell Lowell wrote. Great challenge evokes mighty response. The places of honor, of ambition realized, were almost all to be found in the ranks of those preparing the Revolution or fighting in the continental army or designing, and making, a new government. There were few fortunes to be made, few industries, universities, institutions of culture to lead. And talent seemed much less divisible than in 20th-century America; that is, of necessity a new beau ideal of leadership had come into being: The patriot saw no necessary tension between being a scholar, soldier, writer, legislator, leader.

Like the heroes of the early Roman Republic and ancient Greece (Rome more than Greece) whom they emulated, these Americans discharged their obligations, as they understood them, by answering multiple vocations and duties, all serving a common end. They did not particularly count the cost. They were not concerned to lay up fortunes for themselves. They had small conception of what our own age calls (and is obsessed by) "stress." They were educated in the classics of ancient literature, history particularly, and in the philosophical literature of 17th- and 18th-century Europe -- Locke, Sidney, Montesquieu, Hume. Many did not attend college: There were only nine universities and colleges by the end of 1776.

Yet they wrote with a grace and lucidity we cannot match. Their minds seemed clearer than ours. And they had also what was imputed to a great general of a later generation: the imaginations of engineers. They knew how to transform ideas into action, into policies and institutions.

When they were young, these leaders of the revolutionary generation accustomed themselves, under the supervision of demanding adults, to long periods of solitary study. Their English near-contemporary, William Wordsworth, remembered a statue of Isaac Newton in the courtyard of his Cambridge college: "the marbled index of a mind voyaging forever, through strange seas of thought, alone." As young people, they were not often praised or rewarded. The satisfactions of learning, they were taught, were in the learning, and in how that learning -- like the unconscious predisposition to emulate certain heroes -- might somehow be transmuted into examples and lessons that would influence their own conduct later on. Like Pericles's men of Athens, they would thereafter "be ashamed to fall below a certain standard."

* * *

The English historian Paul Johnson wrote that the generation of American leaders of the 1940s was our ablest since that of the Founding. No one imputes to this second generation the creative genius of the first. Several of its tribunes were professional soldiers and naval officers -- men ideally (according to Clausewitz) of a searching rather than a creative intellect. One has the impression, studying their lives as youngsters, that they were not "brilliant" at school.

They were born between 1875 and 1890: they included FDR, Douglas MacArthur, George Marshall, Ernest King, Dwight D. Eisenhower, Harry Truman, Omar Bradley, William Halsey, Chester Nimitz. With the exceptions of MacArthur and Roosevelt, they were children of the American heartland, all born into modest, even hardscrabble circumstances. The service academies were literally their way out of Dodge.

For the military men, Marshall and MacArthur in particular, enormous responsibility was given them as lieutenants in their early 20s in the Philippines. Self-reliance is usually the consequence, and so is what David Riesman called, in 1950, "inner-directedness": the predisposition to act on judgment and conscience rather than calculations of external approval. Thus Harry Truman, almost blind without glasses, insisted on leading his artillery battery in the most severe, and final, campaign of the Great War -- the Meuse-Argonne offensive (in which American casualties were 126,000, including 26,000 killed, in six weeks). More than 20 years later, Truman, then a U.S. senator, sought service again, but George Marshall turned him down: Truman was then in his mid-50s.

Nimitz, Bradley, Eisenhower -- towers of moral strength, settled wisdom, common sense of an elemental, singularly American kind: and all, like the 16 millions who served in World War II (more than 10% of the country's 1945 population) with the innate modesty which remains above all others the quality which draws Americans of 2006 to this Greatest Generation. Such people embodied the virtues, including the un-self-conscious nobility the founding generation admired in their ancient models.

* * *

In a phrase that recurs so often that it has almost become a cliché, we read that Bradley, or Ike, or Halsey "was a mediocre student at the Academy." Yet, in a confluence of character, conscience and mind that we cannot disentangle, they considered problems of enormous complexity, took counsel of those they admired, attained wise and useful decisions, and inspired and led huge numbers of servicemen -- and women -- to complete their missions. In our time of crisis will another generation bring forward men and women of the same métier as those of the Revolutionary and Second World Wars?

The answer expected is a red-blooded Of Course We Will! To suggest anything less is to invite the imputation of cynicism. But the culture of palliatives -- in which virtually all minor encumbrances of imperfect health, physical and psychological, can be erased by drugs, in which most avenues of advancement rely less on the actions of self-reliance than upon the legions of aids (human and material) that are gathered to smooth their way, and in which the ends to be pursued and the ambitions to be gratified are usually (though not always) those that exclude useful service to the nation -- this is not a culture that cultivates the qualities most needed.

Consider the character of George Marshall, leader of the American Army from 1939 to 1945, whose name, President Truman insisted, be given the Plan for European Recovery in 1947. A small episode, early in Marshall's final retirement, is illustrative. He was offered very large sums of money to write his memoirs. He declined instantly. It would not do to call attention to himself. His country, he said, had already compensated him for his service -- and besides, what he would be obliged to write, writing truthfully and accurately, might cause pain to people who had done their best, and who deserved well of their country.

Gen. Bunting, former superintendent of VMI, is president of the H.F. Guggenheim Foundation.




"The Decade in Management Ideas," by Julia Kirby, Harvard Business Review Blog, January 1, 2010 --- Click Here
http://blogs.hbr.org/hbr/hbreditors/2010/01/the_decade_in_management_ideas.html?cm_mmc=npv-_-DAILY_ALERT-_-AWEBER-_-DATE

Tis the season for "year's best" lists — and even, this year, for "decade's best" lists — and who are we to resist the urge? A few of us HBR editors (Gardiner Morse and Steve Prokesch helped especially) took the opportunity to look back on the past ten years of management thinking and are ready to declare our choices for the — well, why not say it — most influential management ideas of the millennium (so far).

  1. Shareholder Value as a Strategy. The notion of producing attractive returns for investors is as old as investing, but this was a decade when the pursuit of shareholder value eclipsed too much else. Increasingly sophisticated tools and metrics for value-based management pushed the consideration of stock price effects deep into operational decision-making, and made sure everything pointed toward bonus day. By 2009, even the man most known for focusing on value was saying it was a dumb idea. "Shareholder value is a result, not a strategy," Jack Welch proclaimed. "Your main constituencies are your employees, your customers and your products."
  2. IT as a Utility. The current mania for cloud computing is the latest step in a long process by which enterprises have dispensed with their proprietary glass houses and begun buying computing capabilities as services. One impetus was the Y2K scare, which forced attention onto those onerous legacy systems as the new millennium dawned.
  3. The Customer Chorus. Through a range of technical and social developments, customers' voices grew louder (whether collectively in ratings systems like Amazon's, or individually through viral kvetches like Dave Carroll's "United Breaks Guitars") and companies found ways to listen. It's a true megatrend: the steps along the way have felt gradual and natural, but collectively they change everything.
  4. Enterprise Risk Management. Sounds crazy right now to say that the last decade was notable for risk management. But especially after 9/11, companies saw the sense of bringing the many and various pockets of it under the same umbrella. Newly empowered chief risk officers looked for trouble spots on a landscape ranging from financial hedging to pirates on the open sea.
  5. The Creative Organization. The decade saw a general revolution in the way many organizations came to view their source of competitive advantage, and a commitment to finding ways to produce creative output more reliably. Even before they embraced "design thinking," managers were encouraging collaboration, drawing on diverse perspectives, and engaging whole workforces in "ideation."
  6. Open Source. Purist geeks will be quick to point out that the term open source and some very substantial achievements came in the late 1990s, but here we pay homage to the spread of that model beyond software code. Was it only in 2001 that Wikipedia was born? And how many things have been wiki'ed since?
  7. Going Private. Cheap debt reignited the LBO scene just as post-Enron reforms created real disincentives to operate as a public company. As the decade wore on, private equity's playbook for turning around businesses was increasingly held up as best-practice management. Now, ideas like, ahem, leveraging up don't seem so wise, but private equity's devotion to strategic focus and demanding governance might endure.
  8. Behavioral Economics. Okay, by now, you're all shouting "that's definitely older than 10 years" and you're right. But talk about a set of ideas whose time has come. In the prior decade, can you remember when someone with Steven Levitt's profile had a breakout bestseller? Or when someone modifying the word economist with "rogue" (or "rock star") could keep a straight face?
  9. High Potentials. Consulting firms and other deeply knowledge-based businesses knew this all along, but in the past decade the rest of the corporate world woke up to the fact that some managers are more equal than others. Formal programs were established to identify, cultivate, and retain "hi-po's". Executive coaching, a perk often provided for the anointed, experienced explosive growth as an industry.
  10. Competing on Analytics. Decades of investment in systems capturing transactions and feedback finally yielded a toolkit for turning all that data into intelligence. Operations research types, long consigned to engineering realms like manufacturing scheduling, got involved in marketing decisions. Managers started learning from experiments that were worthy of the name.
  11. Reverse Innovation. The bigger story here is the maturation of the concept of globalization, particularly with regard to emerging economies. Most big corporations in 2000 saw them primarily as a source of natural resources and, increasingly, cheap labor. Then, as rising employment fueled the development of middle classes, cities in India and China came to represent valuable markets. Now, these non-US consumers are coming to the foreground. Firms like GE and Microsoft are doing R&D in emerging markets, optimizing on those preferences and constraints, and then bringing the results back home.
  12. Sustainability. More than anything, the first ten years of the 21st century will be remembered as the decade that businesses went green — if only in their marketing to a public highly attuned to Al Gore's inconvenient truth. We're not cynical on this point, however. The efforts we see by companies large and small to reduce their carbon footprints and other environmental impacts are sincere and effective, as far as they go. But ten years from now, as we revisit this exercise, forgive us if we declare 2010-2020 to be the decade of sustainability. "The idea was in the air before 2010," we can picture ourselves writing. "But this was the decade when it really took hold."

So there it is: our roundup of the management ideas that shaped the decade. Now, you tell us: Which ones don't belong on this list? And what did we miss?

Early respondents sent in the following replies:

You have missed out on a main idea that developed in the decade - outsourcing. It is much less about cost arbitrage and much more about what C.K. Prahalad called 'R=G' (resources are global). The structuring of outsourcing has undergone a complete change, from merely leasing equipment and contract manufacturing within a limited geographic space into a strategic option to leverage not only cost but also skill, time, etc. I would call that Outsourcing 2.0. But for Outsourcing 2.0, many businesses could have folded up by now.
SRININ


Such a good point. Thank you, Srinin. Companies' approaches to outsourcing really have evolved quite a bit over the past ten years, and value chains seem to be getting more modular as a result. Have you read John Hagel & John Seely Brown on the topic of "productive friction"? It's an interesting way to think about what you are calling "Outsourcing 2.0."
Julia Kirby


In my opinion you also forgot to refer to the social web and Enterprise 2.0. It is not yet completely deployed everywhere, but in recent years it has boomed and will certainly be a trend for the next decade. Let's see if business can follow the evolution of the web and, more important, whether it can take advantage of it! Hope they will!
João Aguiam


I agree that this was the "decade of sustainability" during which most companies got it -- they realized that they needed to pay attention to their carbon footprints and so on if they wanted to stay in the game. This coming decade, though, will be the decade of sustainability as offensive strategy. We'll see companies that use sustainability strategies to trounce competitors, not just to defend themselves from regulators, energy costs, and bad press.
Gardiner Morse

Jensen Comment
Almost completely overlooked are the innovations (and in some cases disasters) in financial risk management such as securitizations, CDOs, credit derivatives, etc. Although many of these contracting ideas originated in the 1990s, innovative and often fraudulent applications were invented in the early 21st century.

Also overlooked is the explosive growth of hedge fund management, often used to escape government regulation.

The timeline of derivatives financial instruments applications, frauds, and accounting can be found at
http://faculty.trinity.edu/rjensen/FraudRotten.htm

Bob Jensen's threads on management theories are at
http://faculty.trinity.edu/rjensen/theory/00overview/GreatMinds.htm