What Generational Differences (if any) Impact Learning at Work?

What generational differences should you understand as you think about learning and development? I’ll cover two of them in this post, but please add your thoughts and comments below as well.

In examining generational differences, I’ll admit that by and large, I am a skeptic. Most differences seem overblown and are more likely an effect of age rather than generation (e.g., most people in their 20s tend to be more idealistic, rather than those in their 20s right now belonging to an “idealistic generation”). Nevertheless, social norms do change over time, and it’s hard not to see how the Great Depression and World War II shaped the dispositions of a large portion of the population in recognizable ways.

Skepticism about generational differences is easy to voice, but it can be like wielding a machete that chops down the whole jungle without discriminating which plants (ideas) might have merit. It can close us off from seeking to understand how our culture is shifting, however difficult that shift might be to pinpoint. This is difficult terrain to examine scientifically, as it requires longitudinal research far beyond the time horizon of most research agendas.

Beyond pinpointing exactly what generational differences exist, it is difficult to know why they exist. Differences are often stated in descriptive terms without any theoretical rationale. It is not hard to understand why the generation that grew up in the Great Depression would be thriftier, but many differences are presented without any explanation (e.g., “Millennials prefer extrinsic rewards”). These complications are well understood and tempt one to take a machete to the entire jungle.

Furthermore, if you primarily view the mind as a biological entity, then asking about generational differences in learning and development is ludicrous: the biological make-up of our minds would not evolve in any perceivable way over the course of a few decades. The mind, however, is a product of both our biological heritage and the culture we are enmeshed in. Thus, the question becomes: have cultural values (and technology) shifted in some distinguishable way that impacts how we might engage in learning and development?

It was into this jungle that I went searching for some credible evidence that might shed light on this question.

First, it helps to know the generally recognized generational cohorts:

  • Silent Generation (born 1925-1945)
  • Baby Boomers (born 1946-1964)
  • Generation X (born 1965-1979)
  • Millennials (also known as GenMe or GenY, born 1980-1994)
  • iGen (born after ~1995)

Of course, as Jean Twenge (2017) aptly argues in her book iGen, there is no drastic difference between individuals born on either side of these cutoffs. However, individuals born ~10 years apart in these cohorts would have had a different cultural experience.

From the evidence I reviewed (mostly peer-reviewed articles and Jean Twenge’s book iGen, but also data from the Monitoring the Future study of high school seniors, which began in 1975), I could discern two considerations for understanding generational differences with regard to learning and development at work.

Psychological Safety

By many metrics, there is a greater concern with safety among the most recent generational cohort, in particular around avoiding risk. This may be the result of well-intentioned parenting practices that help children and teenagers avoid risky behavior. Twenge (2017) reports that in a nationally representative sample of 8th and 10th graders, 50% agreed in the 1990s with the statement “I like to test myself every now and then by doing something a little risky,” but by 2015 that number had fallen to 40% (p. 153). This, along with a declining number of teenagers getting a driver’s license (a roughly 15% drop among high school seniors over the last 40 years; Twenge, 2017, p. 26) and parents who always know where their kids are, paints a portrait of a generation more accustomed to being kept safe.

If I had to bet, this is connected (and will continue to be) with a greater focus on “psychological safety” at work. Psychological safety is defined as “a shared belief that the team is safe for interpersonal risk taking” (Edmondson, 1999). It is measured by asking whether mistakes will be held against you and whether team members are able to bring up problems and tough issues. Amy Edmondson of the Harvard Business School has conducted highly regarded research on psychological safety and how it impacts team learning. In a recent book, I advocated for psychological safety as a means to foster greater transparency and organizational learning.

The term also gained more widespread attention after Charles Duhigg published an article in the New York Times Magazine in February 2016 titled “What Google Learned From Its Quest to Build the Perfect Team.” In the article, Duhigg discusses Google’s extensive research on its own teams to uncover the essentials of great teams. The punch line, as you might guess, is psychological safety.

How can you increase psychological safety at work? In efforts where psychological safety was central to a change initiative, the results have been mixed (Edmondson, 2004). Reflecting on a comprehensive initiative at Prudential Financial aimed at making individuals feel safe to speak up (partly because of prior ethical infractions), Edmondson (2004) concludes, “Psychological safety is not created by telling people to feel safe: it is a byproduct of leadership action and example occurring in the context of doing real work” (p. 11, emphasis in original). Thus, psychological safety is best seen as a means to an end, a way to facilitate idea sharing in the pursuit of “real work,” not as a pursuit in itself.

Thus, to the extent that you are managing a team or developing learning programs for those just entering the workforce, you’ll want some recognition of psychological safety, not as a central focus, but through modeling some acceptance of what can be learned from mistakes. Of course, this is open to plenty of critiques about “coddling a generation” (see Lukianoff & Haidt, 2018), which is fair, but it’s helpful to have a better contextual understanding of the subtle generational shifts that may be occurring.

Smartphones, Distractions, and Limitless Content Choices

Central to any understanding of generational differences is how the smartphone impacts one’s development. This is central to Jean Twenge’s thesis in iGen, and owning a smartphone understandably shifts how we experience the world. The iPhone was introduced in 2007, and Twenge cites a 2015 marketing survey finding that two out of three U.S. teens owned an iPhone (p. 2). If the statistic were that teen smartphone ownership is closer to 100 percent, I wouldn’t doubt it. The question becomes: how does smartphone usage impact a generation entering the workforce? A full treatment of this question is beyond the scope of this post, and longitudinal evidence has not been established (for an excellent review, see Wilmer, Sherman, & Chein, 2017), but one obvious influence of the smartphone on our cognition is that it scatters our attention. It can do so exogenously—by text alerts, etc.—but also endogenously, as Wilmer et al. (2017) state:

“Endogenous interruptions occur when the user’s own thoughts drift toward a smartphone-related activity, and thereby evince an otherwise unsolicited drive to begin interacting with the device. These endogenously driven drifts of attention might arise from a desire for more immediate gratification when ongoing goal-directed activities are not perceived as rewarding” (p. 4).

There have been correlational studies linking heavy “media multitasking” with a diminished ability to sustain attention (for a review, see van der Schuur et al., 2015), but, to my knowledge, no longitudinal studies that can determine causality. Whether smartphone usage throughout early adolescence has a unique impact during that developmental period, or affects individuals of all ages equally, remains to be seen.

A potential decline in sustained attention may be evident in the decline of reading. As Twenge (2017) cites, among high school seniors in 1976, 10% said they did not read for pleasure in the prior year, but by 2015 that number had increased to 30% (p. 61). Additionally, throughout the 1970s and 1980s, over 50% of high school seniors “read a book or magazine nearly every day.” That number has steadily declined, and by 2015 only 16% of high school seniors agreed with this statement. As Twenge states, “For a generation raised to click on the next link or scroll to the next page within seconds, books just don’t hold their attention.” As one 12-year-old she interviewed states, “I’m not really a big reading person. It’s hard for me to read the same book for such a long time. I just can’t sit still and be superquiet” (p. 61).

Of course, the causes of the decline in book reading are multifarious; however, the smartphone and our access to limitless content have to be acknowledged. By and large, this is an experience of life in which one is less accustomed to focused concentration, or to engaging in what Cal Newport (2016) calls “deep work,” which he defines as “professional activities performed in a state of distraction-free concentration that push your cognitive capabilities to their limit” (p. 3).

Given the greater difficulty of sustaining attention (one that may be more pronounced for iGen), all of this raises the bar for engaging individuals. It suggests the obvious tactic of frequent breaks in any training event, as well as setting the friendly ground rule of “being present.” It also suggests taking proactive measures to create less distraction in the workplace (counter to the open-office movement). Given that we are already dealing with an internally driven desire to shift our attention, we can at least make an effort to combat distractions in our environment.

Conclusion

In sum, pinpointing precise generational differences and how they impact work is complex, especially disentangling the natural inclinations of age and career stage from generational shifts that are unlike anything that has occurred before. In addition, we are dealing with subtle shifts in cultural norms over decades. While I have outlined two considerations that may be generational, please comment below on your own experiences as well.

References:

Edmondson, A. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350-383.

Edmondson, A. (2004). Teaching Note: Safe to Say at Prudential Financial. 5-604-021. Boston, MA: Harvard Business School Publishing.

Lukianoff, G., & Haidt, J. (2018). The coddling of the American mind: How good intentions and bad ideas are setting up a generation for failure. New York: Penguin Press.

Newport, C. (2016). Deep work: Rules for focused success in a distracted world. New York: Grand Central Publishing.

Twenge, J. M. (2017). iGen: Why today’s super-connected kids are growing up less rebellious, more tolerant, less happy—and completely unprepared for adulthood—and what that means for the rest of us. New York: Atria Books.

van der Schuur, W., Baumgartner, S. E., Sumter, S. R., & Valkenburg, P. M. (2015). The consequences of media multitasking for youth: A review. Computers in Human Behavior, 53, 204-215.

Wilmer, H. H., Sherman, L. E., & Chein, J. M. (2017). Smartphones and cognition: A review of research exploring the links between mobile technology habits and cognitive functioning. Frontiers in Psychology, 8, 1-16.

Photo Source: geralt/Pixabay

What Does It Mean to Be Authentic?

Bill George, former CEO of Medtronic, often speaks of climbing the corporate ladder early in his career at Honeywell and becoming disillusioned with himself. He mentions wearing cufflinks to try to impress the Board of Directors, but “one day I’m driving home. It’s a beautiful day. I looked in the mirror and I’m miserable. I don’t like the businesses I’m in. I’m not passionate about that but most importantly I don’t like myself” (George, 2009). George was acting inauthentically to impress others, and he had a personal transformation that led him to switch industries and begin acting in a way that felt more congruent with his “true self.” Notably, in George’s retelling, the transformation was sparked by looking in a mirror, which presumably heightens one’s consciousness of the self. George would eventually move on to Medtronic and write several books about authentic leadership (e.g., Craig, George, & Snook, 2015). He would also help develop “Authentic Leadership,” a central leadership course at Harvard Business School.

While authenticity is gaining traction in the field of leadership, what does it mean to be authentic? Authenticity is about congruence between our deeper values and beliefs (i.e., a “true self”) and our actions. When there is a lack of congruence, it creates an emotional force that seeks reduction. This posits a scientifically elusive but recognizable concept—the notion of an “authentic” or “true self” from which this lack of congruence is generated (Harter, 2002; Sheldon, 2004; Strohminger, Knobe, & Newman, 2017). While an “authentic self” is recognizable from our everyday experience, it is not a concept incorporated into much of psychology. As Ken Sheldon, Distinguished Professor of psychology at the University of Missouri, states, “Although many of us would agree with the folk wisdom that we should try to be ourselves, social-cognitive theories have no way of making sense of this statement” (2004, p. 252).

Thus, the notion of authenticity has a skeptical hill to climb, in particular because many psychological approaches—most obviously social psychology with its emphasis on contextual determinants of behavior—have little ability to “make sense” of authenticity. Nevertheless, the notion of an “authentic self” accords with much of our experience. We can readily recall experiences when we were acting inauthentically—in a way that felt at odds with deeper values of who we believe ourselves to be (see Lenton, Bruder, Slabu, & Sedikides, 2013). We can also recall instances of being our “authentic self,” when we shared openly our opinions and perspectives and felt validated in doing so.

However plausible, the notion of an authentic self doesn’t seem able to coexist with the various roles we play in our lives—as an employee, spouse, friend, and family member. If we all play various roles, which one is the “real me”? Although we shift roles based on our context, we can still have a sense of a “true self.” Researchers have found that adolescents come to be concerned with what is their “true self” (Harter, 2002). They have explored this concern by having individuals list attributes of the self in different relational contexts (e.g., with friends, at school, with their mother or father), and then having individuals identify which attributes conflict—for example, being “outgoing” with friends but “depressed” around parents (Harter, 2002). Individuals vary in the number of attributes that conflict, ranging anywhere from 1 to 15 or 20 (Harter, 2002). When faced with these conflicts, Harter explains, some individuals “spontaneously agonized over which of these conflicting attributes represent true-self behavior, and which seemed false” (2002, p. 385). Nevertheless, when there are conflicts among these attributes, individuals can identify “true self” behavior, using descriptions such as “the ‘real me inside,’ ‘saying what you really think,’ ‘expressing your opinion.’ In contrast, false self-behaviors are defined as ‘being phony,’ ‘not stating your true opinion,’ and ‘saying what you think others want to hear’” (Harter, Bresnick, Bouchey, & Whitesell, 1997, p. 844).

Ultimately, the question of whether we are a “multiplicity of selves” or have a “true self” calls for a both/and: rather than debating whether we have either a multiplicity of selves or only one true self, we both shift given the context and retain a sense of a true self. And if we have a sense of a true self, we can be more or less authentic to it; individuals vary in their authenticity. In measuring individual differences in this regard, Wood et al. (2008) ask questions such as “I always stand by what I believe in” and “I am true to myself in most situations” (p. 399). They also ask whether you feel alienated from the self, with questions such as “I feel out of touch with the ‘real me’”—much like the experience described by Bill George as he was driving home that afternoon.

In sum, authenticity is not as ephemeral as it might appear. It has an emerging body of research (most recently, see Gan, Heller, & Chen, 2018), and like Bill George, we can recall instances of being inauthentic, often due to a desire to conform and impress others. In addition, as you might guess, researchers have found numerous correlations between authenticity and well-being (Wood et al., 2008). Thus, it is worth engaging the question of authenticity and making it a more salient experience to strive for across many contexts, even, and perhaps most importantly, at work.

References:

Craig, N., George, B., & Snook, S. (2015). The discover your true north fieldbook: A personal guide to finding your authentic leadership. Hoboken, NJ: Wiley.

Gan, M., Heller, D., & Chen, S. (2018). The power of being yourself: Feeling authentic enhances the sense of power. Personality and Social Psychology Bulletin, 44(10), 1460-1472.

Harter, S. (2002). Authenticity. In C. R. Snyder & S. J. Lopez (Eds.), Handbook of positive psychology (pp. 382-394). New York, NY: Oxford University Press.

Harter, S., Bresnick, S., Bouchey, H. A., & Whitesell, N. R. (1997). The development of multiple role-related selves during adolescence. Development and Psychopathology, 9, 835-853.

George, B. (2009). Good leaders are authentic leaders. Retrieved from https://youtu.be/r6FdIVZJfzg

Lenton, A. P., Bruder, M., Slabu, L., & Sedikides, C. (2013). How does “being real” feel? The experience of state authenticity. Journal of Personality, 81(3), 276-289.

Sheldon, K. M. (2004). Integrity [Authenticity, Honesty]. In C. Peterson & M. E. P. Seligman (Eds.), Character strengths and virtues (pp. 249-271). New York: Oxford University Press.

Strohminger, N., Knobe, J., & Newman, G. (2017). The true self: A psychological concept distinct from the self. Perspectives on Psychological Science, 12(4), 551-560.

Wood, A. M., Linley, P. A., Maltby, J., Baliousis, M., & Joseph, S. (2008). The authentic personality: A theoretical and empirical conceptualization and the development of the authenticity scale. Journal of Counseling Psychology, 55(3), 385-399.

Ryan Smerek is an associate professor and assistant director of the MS in Learning and Organizational Change program at Northwestern University. He is the author of Organizational Learning and Performance: The Science and Practice of Building a Learning Culture.

Photo Source: Michael Podger/Unsplash

What is Talent?

Talent is an oft-used word, and when it’s invoked we often nod in agreement that we know what someone means. In many cases it might be a synonym for “intelligence”; in other domains it might mean “athleticism.” You can get away without pinpointing a precise definition of talent unless you are a scientist trying to explain performance. Then you want to know precisely what differentiates a world-class athlete, chess grandmaster, or anyone else at the pinnacle of their field.

Recently, however, I heard an excellent definition of talent from Angela Duckworth at the University of Pennsylvania, who says:

“Talent—when I use the word, I mean it as the rate at which you get better with effort. The rate at which you get better at soccer is your soccer talent. The rate at which you get better at math is your math talent. You know, given that you are putting forth a certain amount of effort. And I absolutely believe—and not everyone does, but I think most people do—that there are differences in talent among us: that we are not all equally talented” (Duckworth, 2016).

What I like about this definition of talent is that it allows us to see improvement as a product of both innateness and effort. We may be improving at a slower rate than others, but we can still improve with effort.
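To make Duckworth’s definition concrete, here is one way to formalize it (a minimal sketch of my own, not Duckworth’s notation): treat talent as the slope of a learning curve.

```latex
% Illustrative formalization (my own, not Duckworth's): talent as the
% slope of a learning curve. skill(e) = skill after cumulative effort e;
% s_0 = the starting skill level.
\[
\text{skill}(e) \;\approx\; s_0 \;+\; \underbrace{\frac{\Delta\,\text{skill}}{\Delta\,\text{effort}}}_{\text{talent}} \cdot e
\]
```

On this reading, differences in talent change how steep the curve is, not whether effort moves you along it. Two people investing the same effort can land at different skill levels, yet both still improve.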

This definition of talent also helps us persist. For example, if we are trying to improve in some domain and have high aspirations, we are continually reaching the edge of our current skills. Whenever we sense we are at this “edge” and our performance is judged relative to others, we can interpret the relative feedback either as evidence of a lack of talent or as a reminder that talent is our rate of improvement. The latter interpretation helps us persist, while still allowing for talent differences. It is the classic story of the tortoise and the hare—others may be speeding hares in our domain, but given that we aspire to excel, we may plod along like the tortoise, eventually reaching our goals with deliberate effort.

This is the story told of numerous experts in Anders Ericsson’s book Peak: Secrets from the New Science of Expertise. In domain after domain, Ericsson, a cognitive psychologist at Florida State University, finds that those who engage in persistent practice eventually reach the pinnacle of their field.

For Ericsson, whether it is learning to memorize hundreds of digits (a task he describes at the beginning of the book) or reaching the top of any other domain, practice and effort are at center stage. As he summarizes, “In the long run it is the ones who practice more who prevail, not the ones who had some initial advantage in intelligence or some other talent” (p. 233). Ericsson, a preeminent figure in the study of expertise and the original researcher behind the famed 10,000-hour rule (i.e., that it takes 10,000 hours of practice to become an expert*), is on the side Duckworth alludes to above—those who believe we are more equal in talent than we assume. This can make for frustrating reading, as in almost every instance Ericsson dismisses talent for any individual, even Einstein. He describes how neuroscientists found that Einstein had a “significantly larger than average inferior parietal lobule,” which is thought to play a role in mathematical thinking. In response, Ericsson asks:

“Could it be that people like Einstein are simply born with beefier-than-usual inferior parietal lobules and thus have some innate capacity to be good at mathematical thinking? You might think so, but the researchers who carried out the study on the size of that part of the brain in mathematicians and nonmathematicians found that the longer someone had worked as a mathematician, the more gray matter he or she had in the right inferior parietal lobule—which would suggest that the increased size was a product of extended mathematical thinking, not something the person was born with” (p. 44).

As the book continues, however, this hard line begins to moderate, and Ericsson allows for innate differences to play a role, but only as second fiddle to practice. As he summarizes:

“I suspect that such genetic differences—if they exist—are most likely to manifest themselves through the necessary practice and efforts that go into developing a skill. Perhaps, for example, some children are born with a suite of genes that cause them to get more pleasure from drawing or from making music” (p. 237).

Ericsson’s firm position on the value of practice is the result of working to pinpoint exactly what differentiates elite performers from average performers, with explanations based on innate differences proving elusive in most domains. With the definition of talent offered by Duckworth, however, we don’t have to choose between innateness and effort—both are important—and, if Ericsson’s countless studies of elite experts are any indication, effort is more important than we often assume.

*For an excellent overview of why this is not exactly a “rule,” and the many caveats needed after its popularization by Malcolm Gladwell in his book Outliers, see Ericsson & Pool, 2016, pp. 109-114.

References:

Duckworth, A. (2016, July 25). Angela Duckworth on grit. EconTalk [Audio Podcast]. Retrieved from http://www.econtalk.org/angela-duckworth-on-grit/

Ericsson, A., & Pool, R. (2016). Peak: Secrets from the new science of expertise. Boston: Houghton Mifflin Harcourt.

Photo Source: geralt/Pixabay

Do I Really Have to Be Actively Open-Minded?

Jason Fried, the co-founder of Basecamp, a project management software company, describes being at a conference and engaging with a fellow speaker. Fried disagreed with the speaker, and as he says:

“While he was making his points on stage, I was taking an inventory of the things I didn’t agree with. And when presented with an opportunity to speak with him, I quickly pushed back at some of his ideas” (2012).

In response to Fried’s criticism, the speaker replied, “Man, give it five minutes.”

Fried says, “I asked him what he meant by that? He said, it’s fine to disagree, it’s fine to push back, it’s great to have strong opinions and beliefs, but give my ideas some time to set in before you’re sure you want to argue against them. ‘Five minutes’ represented ‘think,’ not react” (2012).

Alan Jacobs, in his book How to Think (2017), calls this entering “Refutation Mode—and in Refutation Mode there is no listening” (p. 18). In Refutation Mode you may even miss additional arguments and nuances a speaker might give: your emotional response to, and refutation of, an early point shuts off additional incoming information. This is why, if someone has entered Refutation Mode, you might be surprised they didn’t hear that you’ve already addressed their point. You may have experienced this yourself when raising your hand in a seminar or class—once you do, you become so fixated on what you will say that your attention narrows and you stop listening to the conversation still occurring.

So what is the antidote to entering Refutation Mode? In Keith Stanovich’s impressive catalog of how to assess “good thinking” in his book The Rationality Quotient (with Richard West and Maggie Toplak), what emerges again and again is Actively Open-Minded Thinking.

Stanovich et al. (2016) measure Actively Open-Minded Thinking with a 30-item scale drawn from numerous sources, including items from a flexible thinking scale, the Big Five’s openness to experience, and resistance to dogmatism, among others (see also Stanovich & West, 1997). I won’t go into specifics on each dimension, but a few items should show how Actively Open-Minded Thinking is assessed. These include:

1) “Beliefs should always be revised in response to new information or evidence.”

2) “I like to gather many different types of evidence before I decide what to do.”

3) “It is important to persevere in your beliefs even when evidence is brought to bear against them.” (R)

(R) indicates the item is reverse scored (for additional items, see Stanovich et al., 2016, p. 366).
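To make the scoring mechanics concrete, here is a minimal sketch of how a Likert-style scale with reverse-scored items is typically computed. The 6-point response format and the scoring functions are my illustrative assumptions, not the published procedure of Stanovich et al. (2016):

```python
# Minimal sketch: scoring a Likert-style scale with reverse-scored items.
# Assumes a 6-point agreement scale (1 = strongly disagree, 6 = strongly agree);
# the actual response format in Stanovich et al. (2016) may differ.

SCALE_MAX = 6  # highest possible response on the assumed scale

def score_item(response: int, reverse: bool = False) -> int:
    """Return the item score, flipping reverse-keyed items (1 -> 6, 2 -> 5, ...)."""
    return (SCALE_MAX + 1 - response) if reverse else response

def aot_score(responses: list[tuple[int, bool]]) -> float:
    """Average the (possibly reversed) item scores; higher = more actively open-minded."""
    return sum(score_item(r, rev) for r, rev in responses) / len(responses)

# Example with the three items quoted above; the third is reverse scored (R).
responses = [
    (5, False),  # "Beliefs should always be revised..."
    (6, False),  # "I like to gather many different types of evidence..."
    (2, True),   # "It is important to persevere in your beliefs..." (R)
]
print(round(aot_score(responses), 2))  # -> 5.33
```

Reverse scoring ensures that agreeing with a dogmatism-flavored item lowers, rather than raises, the overall open-mindedness score.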

The willingness to be open-minded, assess evidence, and update our beliefs takes cognitive effort. We need to override our initial impulses. This is further complicated when beliefs are central to us (Haidt, 2012). Nevertheless, it can serve as a cognitive aspiration. We will certainly fall short of being actively open-minded, but when we sense we are in Refutation Mode, we can try to momentarily recalibrate and see if being actively open-minded may serve us in the situation.

For example, in making billion-dollar investments, Ray Dalio, the founder of the world’s largest hedge fund, makes the dictate to be “Radically Open-Minded” one of his key management principles. As he states:

“Radical open-mindedness is the ability to effectively explore different points of view and different possibilities…It requires you to replace your attachment to always being right with the joy of learning what’s true” (2017, p. 187).

But, you may argue, what are the limits of being Actively Open-Minded? Should I listen to and engage every opposing point of view? As Stanovich et al. (2016) describe, being Actively Open-Minded is not a disposition to maximize. However, we tend to be deficient enough in this disposition that more is generally better. Thus, in being open-minded we obviously do not forgo having a point of view, but instead “practice being open-minded and assertive at the same time” (Dalio, 2017, p. 541).

So, the next time you see yourself in Refutation Mode—much like Jason Fried—see if you can stop yourself and be Actively Open-Minded. It won’t be easy, you may need “five minutes,” and you may not change your mind, but you’re likely to learn more in the process.

References:

Dalio, R. (2017). Principles: Life and work. New York: Simon & Schuster.

Fried, J. (2012, March 1). Give it five minutes. Retrieved from https://signalvnoise.com/posts/3124-give-it-five-minutes

Haidt, J. (2012). The righteous mind: Why good people are divided by politics and religion. New York: Pantheon Books.

Jacobs, A. (2017). How to think: A survival guide for a world at odds. New York: Currency.

Stanovich, K. E., & West, R. F. (1997). Reasoning independently of prior belief and individual differences in actively open-minded thinking. Journal of Educational Psychology, 89(2), 342-357.

Stanovich, K. E., West, R. F., & Toplak, M. E. (2016). The rationality quotient: Toward a test of rational thinking. Cambridge, MA: MIT Press.

Photo Source: TeroVesalainen/Pixabay

What Happens When Two Competing Ideas are True in Managing People?

In teaching and in writing, I use a fair number of continuums to help move us off the traps of binary thinking. Our language tends to come in competing constructs that can obscure more nuanced views of reality: nature or nurture, fixed or growth mindset, smart or not smart, etc.* Sometimes these binary categories adequately represent reality, when there are discrete shifts, but more often than not they take us toward untenable extremes, open to so many counterexamples that we may be whipsawed back and forth before we must admit that both are partially true (e.g., it’s nature and nurture).

While continuums are useful, and they help us see how we might move back-and-forth along some binary given the context, there is another way of directly addressing the limits of our language—by understanding paradox.

If you are like me, however, when someone begins speaking about paradoxes, it can seem like an overreaching attempt at profundity. I’ll try to avoid this trap as I describe what I think is a compelling study of paradoxical leader behaviors (PLBs) recently published in the Academy of Management Journal.

In the article, Yan Zhang and colleagues outline five dimensions of paradoxical leader behaviors (I’ve included a sample measurement item for each in quotes):

  1. Treats subordinates uniformly while allowing individualization: “Assigns equal workloads, but considers individual strengths and capabilities to handle different tasks” (p. 548).
  2. Combines self-centeredness with other-centeredness: “Is confident regarding personal ideas and beliefs, but acknowledges that he or she can learn from others” (p. 548).
  3. Maintains decision control while allowing autonomy: “Makes decisions about big issues, but delegates lesser issues to subordinates” (p. 548).
  4. Enforces work requirements while allowing flexibility: “Is highly demanding regarding work performance, but is not hypercritical” (p. 548).
  5. Maintains both distance and closeness: “Maintains distance from subordinates at work, but is also amiable toward them” (p. 548).

These behaviors help us see continuing tensions that occur in management. You want high standards, but not to the extreme of being hypercritical; you want close relationships, but you are not best friends; and you want to give employees autonomy, but it needs to be balanced with control. Zhang et al. (2015) describe these as paradoxes because “rather than being ‘either–or,’ all things, including problems and challenges, are interrelated as ‘both–and’” (p. 539).

You might argue, however, that a person in a leadership role should flexibly shift their approach in either direction given the context, making this just a repackaging of situational leadership and contingency theory. In response, the authors argue that choosing between these tensions should not be perceived as a “necessary evil,” as situational theorists might see it; instead, “to sustain long-term effectiveness, leaders must accept and harmonize paradoxes simultaneously” (p. 539). I’m not sure that contingency theorists would see choosing as a “necessary evil,” but the authors make a fair point about the need to harmonize and see the tensions as occurring simultaneously, rather than shifting behaviors discretely at different times. Then again, perhaps it is both: in some conditions you are balancing these competing tensions, and in others you are shifting discretely.

Nevertheless, the research brings clarity to what can be hidden tensions, and the authors’ balanced approach helps overcome polemics about management derived from a selective reading of research (e.g. “Give everyone autonomy.”)

While these paradoxical leader behaviors are descriptively helpful, Zhang et al. (2015) also find that supervisors who scored higher on a combined measure of paradoxical leader behaviors had positive effects on subordinates. Specifically, in a sample of 76 supervisors and 588 subordinates across six companies in China, subordinates had higher work role performance as measured by scales of proficient, adaptive, and proactive behavior. But why would this be the case?

The authors argue that a leader serves as a role model “to show employees how to accept and embrace contradictions in complex environments” (p. 545), which leads to more effective action. Second, the authors argue that paradoxical leader behaviors more effectively create bounded and discretionary work environments. As they state, “bounded environments stress norms and standards whereby followers understand their work roles and responsibilities. At the same time, PLB gives followers discretion and individuality within the structure, allows them to be the focus of influence to maintain their dignity and confidence” (p. 545).

All of this sounds intuitively plausible and brings an Eastern perspective to harmonizing competing tensions. It also helps build additional nuance into our thinking (see an earlier post on mental complexity).

As you think about the five paradoxical leader behaviors listed above, which one resonates the most with you? Are there other paradoxes related to managing that you would include? When does an “either/or” perspective seem to obscure the phenomenon, and “both/and” is needed?

References:

Walker, B. M., & Winter, D. A. (2007). The elaboration of personal construct psychology. Annual Review of Psychology, 58, 453-477.

Zhang, Y., Waldman, D. A., Han, Y-L., & Li, X-B. (2015). Paradoxical leader behaviors in people management: Antecedents and consequences. Academy of Management Journal, 58(2), 538-566.

*This is often referred to as our “personal constructs” derived from George Kelly’s (1955) The Psychology of Personal Constructs. For an updated overview of personal construct psychology, see Walker & Winter (2007).

Photo Source: geralt/Pixabay

What Explains the Puzzle of Independent Thinking?

If you were like me in the aftermath of the 2008 financial crisis, you often heard (and believed) that no one saw it coming. Yet the prominent firms going bankrupt had thousands of people working in them who had every reason to see the risk they were taking. If no one saw it coming, the crisis must have been the result of hidden and overwhelming financial complexity, beyond anyone’s ability to see.

Reading Michael Lewis’ book The Big Short dealt a dramatic blow to this naïve belief. Many individuals did see it coming, and theirs was more than a half-formed opinion—they made million-dollar bets based on their analysis. But why did the individuals portrayed in the book have it figured out while so many others were caught blindly unaware?

I posed this question to an old college friend who has spent his career at prominent investment banks and hedge funds in New York City. He responded that you had to be an outsider to see it. To a large degree, the individuals in The Big Short were outsiders to Wall Street, although not exclusively, as seen in Steve Carell’s character in the movie adaptation.

So what might account for the independent thinking that allowed these individuals to see what others couldn’t? While research here is early, there is likely a cluster of thinking habits (and situational conditions) that foster independent thinking. Some of these habits are outlined in the “employee voice” literature in the field of management. Employee voice is when individuals speak up at work, sometimes about ethical violations, but sometimes because they see things differently and have the courage to speak their minds. (For an overview of the employee voice field, see Chamberlin, Newton, & LePine, 2017.)

The thinking habits identified in the employee voice literature include having a proactive personality and a desire to make improvements, but these hardly explain the puzzle of independent thinking, especially in the face of the financial crisis.

One clue is that our thinking and motivation are powerfully shaped by wanting to be part of a social group. We are strongly motivated to belong and to be part of an “Inner Ring” (Lewis, 1944). This means we agree with the dominant view and desperately fear being ostracized (Eisenberger, Lieberman, & Williams, 2003). Those who are outsiders are free of these constraints. This is certainly one plausible candidate to partly explain independent thinking.

In research I have done on employee voice, many interviewees discuss a spontaneous response that breaks with the social norms of their immediate reference group. The reason often given for this response is “integrity.” Respondents mention not being able to live with “integrity” if they didn’t speak their mind and defend a core value. There would be an unacceptable split between how they were acting and what they truly believed.

While being relatively immune to the pull of the “Inner Ring” and having a sense of integrity are both important, I think there was something else at play with the individuals who saw the financial crisis coming. For lack of a better term, I would call it a “commitment to the truth”: a dogged persistence to look at reality as clearly as possible and to listen for any hint of delusion in ourselves and those around us.

It’s hard to know how a “commitment to the truth” is developed, perhaps through exposure to scientific thinking, or by being part of an organization committed to truth (see Dalio, 2017). There are likely many ways this disposition develops. If you have some candidates, please comment below with any ideas you may have.

References:

Chamberlin, M., Newton, D. W., & LePine, J. A. (2017). A meta-analysis of voice and its promotive and prohibitive forms: Identification of key associations, distinctions, and future research directions. Personnel Psychology, 70, 11-71.

Dalio, R. (2017). Principles: Life and work. New York: Simon & Schuster.

Eisenberger, N. I., Lieberman, M. D., & Williams, K. D. (2003). Does rejection hurt? An fMRI study of social exclusion. Science, 302, 290–292.

Lewis, C. S. (1944). The Inner Ring. Memorial Lecture at King’s College, University of London. Retrieved from http://www.lewissociety.org/innerring.php.

Photo Source: Clker/Pixabay

How Can We Promote Valid Learning?

In the movie Groundhog Day (if you haven’t seen it), Bill Murray plays a narcissistic TV weatherman who is sent to Punxsutawney, PA, to report whether the groundhog, Punxsutawney Phil, sees his shadow. Instead, Murray finds himself repeating the same day, again and again. No matter what he does, he’ll wake up again on Groundhog Day, in the same bed and breakfast, with the clock radio waking him to the song “I Got You Babe.” Recognizing his predicament, Murray begins to indulge every base impulse—leading to humorous scenes of Murray gorging on every donut in a diner and driving Punxsutawney Phil off a cliff in a pick-up truck, among other sordid examples.

Throughout the movie, Bill Murray learns from experience that his selfish and indulgent behavior leads to repeated negative outcomes, including repulsing his love interest (Andie MacDowell). Slowly, over dozens of cycles, his character transforms into a “Renaissance Man.” He learns to speak French and to play the piano. Humorously, his helpful actions around town make Murray the target of a bidding war at a charity bachelor auction that evening. Needless to say, Andie MacDowell can’t help but be impressed by Murray’s redemptive transformation.

Contrast how Bill Murray learned from experience with one-shot experiences. When we are in one-shot experiences (e.g., choosing a college, a first job, a home), it is much costlier to “learn from experience” (see also Thaler, 2015, Chapter 6). Because we can’t alter our approach slightly and see the results we get (as Bill Murray did), it is harder to learn what will produce better outcomes.

Without many feedback cycles to validly learn from experience, we are more prone to “superstitious learning” (March & Olsen, 1975), meaning that in the search for causality we attribute what we experience to some prior act. Maybe we received a “lucky rabbit’s foot” as a gift before making a big, one-shot decision. We might superstitiously attribute the consequences we experience to that prior act. Or, closer to home, we might “learn” that the shadow of a groundhog somehow predicts future weather conditions. 🙂

Likewise, when we try to explain organizational performance, we may attribute performance to some social norm. This may be plausible, but in other cases it is an extraneous social norm with no causal impact. For example, I often teach a case study about Apple and the turnaround of the company by Steve Jobs. Steve Jobs had many strengths and talents, but he also had some well-known tendencies to treat people in a less than respectful manner. It’s cognitively tempting to attribute the success of Apple to the accountability created by his berating style of management. We’ll superstitiously learn that a berating management style leads to superior performance. But in the one-shot experience of Steve Jobs turning around Apple, we cannot go back in time and see whether Apple would still have succeeded without that management style. Perhaps it was primarily Jobs’ unique eye for talent and his focus on the beauty and simplicity of design that were causally important, and Apple succeeded in spite of his berating management style.

The point is to recognize whether we are learning like Bill Murray in a “Groundhog Day-like” environment, where we can improve with constant feedback and repetition (see also Kahneman & Klein, 2009), or whether we are in a one-shot experience. If we are in a one-shot experience, the likelihood of superstitious learning is high—heightening the need to temper our conclusions. It also heightens the need to learn how to learn. By this, I mean more systematically designing experiments to foster more valid learning. For example, rather than implementing only one option, we can (when it is cost effective) test two options, much like A/B testing in web analytics, where users receive one of two options and the designer can see which option leads to higher click-through rates. By testing two options we can more validly learn what produces better results, rather than making attributions that may be superstitiously derived.
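To make the A/B logic concrete, here is a minimal sketch of comparing the click-through rates of two options with a two-proportion z-test. The counts are made-up illustration data, and the helper function is my own sketch, not a reference to any particular analytics tool:

```python
# Minimal sketch: a two-proportion z-test for an A/B comparison.
# All numbers below are invented for illustration.
from math import sqrt, erf

def two_proportion_z(clicks_a: int, n_a: int, clicks_b: int, n_b: int):
    """Return the z statistic and two-sided p-value for the difference
    between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)      # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Option A: 120 clicks from 2,000 visitors; option B: 160 clicks from 2,000.
z, p = two_proportion_z(120, 2000, 160, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests a real difference
```

The design choice doing the work here is randomization: because visitors are assigned to option A or B at random, a reliable difference in click-through rates can be attributed to the option itself rather than to some extraneous prior act.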

So the next time you find yourself attributing causality (and, more importantly, taking action based on that attribution), ask yourself whether there are other plausible attributions and how you know what you know. And, when the situation affords the opportunity to test various options, work to deliberately create feedback cycles so you can validly learn. You may not redemptively transform like Bill Murray does, but you’ll increase the odds of reaching your goals.

References:

Kahneman, D., & Klein, G. (2009). Conditions for intuitive expertise: A failure to disagree. American Psychologist, 64(6), 515-526.

March, J. G. & Olsen, J. P. (1975). The uncertainty of the past: Organizational learning under ambiguity. European Journal of Political Research, 3,147-171.

Thaler, R. H. (2015). Misbehaving: The making of behavioral economics. New York: Norton.

Photo Source: Alexas_Fotos/Pixabay