I first ran across the word almost 30 years later, when, early in my career as a doctoral student, I was assigned a Psychology Today reading with the curious title, “Bafflegab Pays.” In the article, author J. Scott Armstrong, a marketing professor at Wharton, offered this direct advice: “If you can’t convince them, confuse them.”
In the years since, the street translation of this advice has become part of the lexicon of most American workers: “If you can’t dazzle them with brilliance, baffle them with __.” You know the rest.
But Armstrong’s bafflegab suggestion didn’t seem to be offered as irreverent, tongue-in-cheek advice. Instead, his article provided compelling data to support the wisdom and efficacy of employing bafflegab in academic settings, and seemed to warrant serious attention, particularly from doctoral students whose academic careers would depend on convincing journal editors of their research competence.
In his research, Armstrong found a direct correlation between how difficult articles were to read and comprehend and how respected the journals they appeared in were among academics. In other words, when professors ranked the prestige of management journals, the top-rated journal was the hardest to read and the lowest-rated journal the easiest. And when Armstrong rewrote passages from the hard-to-read, highly ranked journals to make them easier to read, those same professors rated the easier-to-understand versions as less competent than the difficult originals, even though their conceptual content had been carefully preserved.
Excited by the thought that I had just been handed the key to success in academe, I embarked on my own bafflegab journey. I quickly found, however, that there were strong forces aligned against me.
Almost immediately it became clear that when it came to writing unintelligibly, doctoral students did not enjoy the same perception of competency that Armstrong had reported among professors. His observations notwithstanding, my adviser spared no opportunity to tell me that I made no sense.
Later, much to my dismay, I also realized that my PhD committee actually expected me to be able to explain how my esoterically titled dissertation, “The Effects of Reference Dependence on Decision Difficulty,” could inform the practices of real-life managers.
Later still, as a new tenure-track faculty member, I realized that when my students asked me what exactly I did for a living, I would need an adequate and understandable response at the ready. Bafflegab wasn’t going to suffice.
As management education enters the 21st century, at least three forces further challenge how we conduct and, more important, communicate our research to our stakeholders.
First, media rankings of business schools, such as those by Business Week and U.S. News & World Report, affect administrators, alumni, donors and, especially, prospective students, who see their degrees devalued when rankings fall. The influential Business Week rankings, for example, give 90 percent weight to their survey of graduating students and recruiters and 10 percent to faculty publications. Although a BW ranking may not be the true measure of a school’s quality, and may tempt schools to “look good rather than be good,” there is nevertheless an urgent need to express our research in a way that is intelligible and responsive to the market.
Second, executive education is a strong component of business school strategy and requires, if not an outright, fundamental shift from research to teaching, then at least the development of materials that are relevant to practice. When experienced managers sit in the classroom, their question is how they can use what we teach to improve their businesses. Such questions, once again, challenge us to look at our research in practical ways.