Plutocrats

THREE

SUPERSTARS

A society in which knowledge workers dominate is under threat from a new class conflict: between the large minority of knowledge workers and the majority of people, who will make their living traditionally, either by manual work, whether skilled or unskilled, or by work in services, whether skilled or unskilled.

—Peter Drucker

It is probably a misfortune that… popular writers… have defended free enterprise on the ground that it regularly rewards the deserving, and it bodes ill for the future of the market order that this seems to have become the only defense of it which is understood by the general public…. It is therefore a real dilemma to what extent we ought to encourage in the young the belief that when they really try they will succeed, or should rather emphasize that inevitably some unworthy will succeed and some worthy fail.

—Friedrich Hayek

It is possible that intelligent tadpoles reconcile themselves to the inconvenience of their position by reflecting that, though most of them will live and die as tadpoles and nothing more, the more fortunate of the species will one day shed their tails, distend their mouths and stomachs, hop nimbly on to dry land, and croak addresses to their former friends on the virtues by means of which tadpoles of character and capacity can rise to be frogs.

—R. H. Tawney

THE INTELLECTUALS ON THE ROAD TO CLASS POWER

When Shelley described poets as “the unacknowledged legislators of the world,” he was referring to the moral and imaginative power of the creative class, not suggesting that it actually controlled the machinery of the state or pulled the levers of the economy. But a samizdat manuscript written in 1973–74 and later smuggled out of communist Hungary asserted precisely that. The Intellectuals on the Road to Class Power, by novelist György Konrád and sociologist Ivan Szelényi, argued that Marx’s vision of a communist state run by the working class, or indeed of an eventual utopia in which the state withered away entirely, had been perverted. Instead, a new class had seized power: the class of engineers, economists, physicists, and, yes, even poets—which is to say, the intellectuals.

Konrád and Szelényi’s book was a revolutionary act—its authors retreated to a village in the Buda Hills to write it in an effort to evade the secret police, and they buried their manuscript in the garden every night, to protect it from being seized in a feared early morning raid. The book caused a predictable splash when it was published in the West in 1979, five years after it had been written—this was, after all, the beginning of the final triumphant chapter of the cold war, the year before Ronald Reagan was elected and the year Leonid Brezhnev was starting the fifteenth year of his reign as general secretary of the Communist Party of the Soviet Union. Anything that discredited the so-called workers’ paradise, particularly from the inside, was a geopolitical event.

The Intellectuals on the Road to Class Power built on the arguments of an even more groundbreaking work smuggled out of Eastern Europe a generation earlier: Milovan Djilas’s The New Class. Writing in the seventies, Konrád and Szelényi were themselves members of the somewhat threadbare but socially cosseted socialist intelligentsia they described, though they were not members of the Communist Party. Djilas—Tito’s right-hand man during the partisan struggle and his emissary to Stalin’s Kremlin court—belonged to the earlier revolutionary generation. Djilas’s book, which earned its writer a seven-year prison sentence in the same jail he had been sent to for his revolutionary activities in the 1930s, was an instant international sensation, and rightly so. It was the first time a senior Soviet bloc official publicly condemned the system he had helped to create. Writing thirteen years after George Orwell had made the same charge in his allegorical Animal Farm, Djilas made the ideologically devastating argument that the so-called workers’ state had simply replaced the old ruling bourgeoisie with a new class, the communist apparat. He even estimated the material gap between this new elite and the people it ruled, citing Soviet dissident Yuri Orlov’s report that a rayon secretary—the head of a provincial or city party organization—earned about twenty-five times more than the average worker.

The important twist Konrád and Szelényi added to this analysis was that the rule of this new class actually amounted to a seizure of political and economic power by the intellectuals—heredity and military might determined power under feudalism; money and commercial acumen were the source of control under capitalism. Under communism, they asserted, technical skills and higher education were the most important defining characteristics of the new party elite.

There was a lot of truth to their analysis, and it is one reason some members of the old Eastern European and Soviet intelligentsias, not to mention their friends in the West, are nostalgic for the old order. But if you read The Intellectuals on the Road to Class Power today, the most striking paradox about this dissident dissection of Warsaw Pact socialism is how powerfully it applies to twenty-first-century global capitalism. For the intellectual class Konrád and Szelényi studied (highly educated technocrats), the collapse of communism and the emergence of a global market economy turned out to be the true road to class power.

The language of twenty-first-century Western economists is rather less colorful than that of 1970s central European dissidents. That’s why you won’t find many references to the rise of the technocrats to class power in the American academic debate of the early twenty-first century. But there is intense study of the impact of “skill-biased technical change” on income distribution, particularly in the developed Western economies. The consensus, advanced most powerfully by MIT economist David Autor, is that skill-biased technical change has indeed brought the technocrats to class power. As Autor puts it, it has polarized the labor market, with huge rewards for those at the top, who have the skills and education to take advantage of new technologies, not much impact for those who do the low-paying “lousy” jobs at the bottom, and a hollowing out of the well-paying jobs in between that used to support the middle class.

There is, of course, a fierce debate about what is causing rising income inequality, and the most honest students of the phenomenon attribute it to a number of factors. But there is broad agreement that skill-biased technical change is a crucial, and possibly the crucial, factor. In a January 2012 speech about income inequality, Alan Krueger, a Princeton economist who now heads President Barack Obama’s Council of Economic Advisers, reported one indicator of that consensus. In the mid-1990s he polled a nonrandom group of professional economists attending a conference at the New York Fed. They overwhelmingly named technological change as the main driver of income polarization—more than 40 percent said it was the chief cause. In a touching sign of humility, the second most popular explanation was “unknown.” Third was globalization. Political shifts, like the decline in the minimum wage and the decline in unionization, came in behind these top three.

There’s another reason the rise of the intellectuals to class power in global capitalism isn’t always immediately apparent within that favored group. That’s because not all of the highly educated are prospering equally. If you have a PhD in English literature, you probably don’t feel you are a member of the ruling elite. And even within tribes whose training vaults them collectively into the 1 percent—like bankers, lawyers, or computer programmers—there’s a twist to the impact of skill-biased technical change that lessens the sense of group prosperity. This is what economists call the “superstar” effect—the tendency of both technological change and globalization to create winner-take-all economic tournaments in many sectors and companies, where being the most successful in your field delivers huge rewards, but coming in second place, and certainly in fifth or tenth, has much less economic value.

The triumph of the nerds is intuitively obvious in the postindustrial economies of the developed West, where brains have had more value than brawn for a couple of generations. But in today’s era of the twin gilded ages, the triumph of the intellectuals is a global phenomenon. The highly educated are in the vanguard of India’s outsourcing miracle; the intellectuals, especially their “technical” branch, are very much in charge in communist China; and even the Russian oligarchs, who are better known in the West for their yachts and supermodel consorts, overwhelmingly have advanced degrees in math and physics.

The rise of the geeks, particularly the super-achievers among them, is a sharp break from the postwar era, when the robust economic recovery in the United States and western Europe was driven by the rise of a vast, and culturally dominant, middle class, much of it employed in blue-collar or relatively routine midlevel clerical, administrative, and managerial jobs. The disappearance of these opportunities, at a time when the super-smart are prospering as never before, is one reason for the populist antipathy toward the nerds. The impulse is strikingly bipartisan—the conservative Tea Party is every bit as hostile toward elites as is Occupy Wall Street, which has defined itself as the forum of the 99 percent.

Ironically—and frustratingly, for those in the discontented middle—the class power of the intellectuals is such that they are rising to the top of the political heap on both the left and the right. Indeed, at a time of fierce partisan conflict, one of the striking paradoxes is how much the champions of liberals and conservatives have in common: Mitt Romney and Barack Obama are both disciplined, dogged millionaires who describe their more popular wives as their better halves, hold degrees from Harvard Law School, and have a preference for data-driven arguments rather than emotional ones. Both men struggle to connect with the grassroots of their parties, coming across as cold and robotic.

You might call it the cognitive divide—the split between an evidence-based worldview and one rooted in faith or ideology—and it is one of the most important fault lines in America today. To his critics on the right, Obama is a socialist with dangerous foreign antecedents. To his critics on the left, he is a waffler with no real point of view and a craven desire to be liked. But the best explanation is that, like the rest of the rising intellectual class to which he belongs, the president is an empiricist. He wants to do what works, not what conforms to any particular ideology or what pleases any particular constituency. His core belief is a belief in facts.

Obama the empiricist is not the man who surged from behind to win the 2008 presidential election. That candidate was the Obama of soaring rhetoric, who promised hope and change. But the pragmatist has always been there. Writing in September 2008, several weeks before the presidential election, Cass Sunstein, who has gone on to serve in the White House, had this to say about his candidate: “Above all, Obama’s form of pragmatism is heavily empirical; he wants to know what will work.” Word crunchers found that the president’s 2009 inaugural address was the first one to use the term “data” and only the second to mention “statistics.”

That cognitive approach is one reason Obama attracted so much support, especially among the younger generation, on Wall Street and in Silicon Valley. That wasn’t some sentimental betrayal of class interests—what Lenin is said to have called the useful idiocy of the capitalists who bankrolled the Bolsheviks; it was a recognition that Obama was an almost perfect embodiment of the super-elite that rules today’s global economy. Obama is a data-driven technocrat, and so are the traders and the Internet entrepreneurs. As one insider who is equally familiar with Wall Street and with Washington, D.C., told me: “You want your money managed by people who are responsive to evidence, who care about results, and who understand that the world is an uncertain place. Obama wants to get his economic advice from the same sorts of people.”

By training, by temperament, and by life experience, Mitt Romney, too, belongs squarely to the empiricist camp; it is hard to make millions in private equity without appreciating the power of data. What looks like flip-flopping to the Republican base can equally be understood as Romney’s effort to bridge the cognitive divide.

The super-geeks don’t just rule Wall Street, Silicon Valley, Bangalore, and Beijing. They are in charge in Washington, too—no matter which party wins.

ELIZABETH BILLINGTON—DIVA FOR THE FIRST GILDED AGE

Elizabeth Billington was a diva, a celebrity—and a superstar. Today, many music scholars judge her to be the greatest English soprano; contemporary critics described her as “the Goddess of Song.” At the invitation of the king, she sang at the Naples opera house, then the most prestigious in the world, where she was the heroine of a new opera, Ines di Castro, written especially for her. Her Italian tour was such a success that after her recovery from an illness in Venice the opera house was illuminated for three nights. In Milan she was warmly received by the empress Joséphine.

At the height of her fame, Sir Joshua Reynolds, at the time Britain’s most popular portraitist, painted Mrs. Billington as Saint Cecilia, about to be crowned with laurels by one cherub, and listening to the singing of four others. The woman on the canvas has a gleaming mane of hair, a perfect oval face, and large, expressive eyes, but her fans complained that it didn’t do her justice. “How could I help it?” Reynolds is said to have challenged his critics. “I could not paint her voice.” When Haydn, a lifelong friend, saw the painting, he told Reynolds: “It is like, but there is a strange mistake. You have made her listening to the angels; you should have made the angels listening to her.”

Mrs. Billington was famous among the hoi polloi, too. When an unauthorized biography of her was published on January 14, 1792, it sold out by three p.m. The sensational highlight: intimate letters she had written to her mother, containing vivid accounts of, as Haydn described them, “her amours,” a group rumored to include the Duke of Sussex and even the Prince of Wales.

Her talent and her celebrity and the international demand for her performances gave her pricing power. In 1801, when Mrs. Billington returned to Britain after seven years in Italy, the managers of both Drury Lane and Covent Garden, London’s two most prestigious opera houses, fought a bidding war for her voice. Mrs. Billington finessed that struggle with an unprecedented compromise: she sang alternately at both houses, and was paid £3,000 for the season, plus a £600 bonus, and a £500 contract for her violinist brother to lead the orchestra whenever she performed. Her total income that year was believed to exceed £10,000, enough to employ five hundred farm laborers, and as much as the annual rents collected by Elizabeth Bennet’s opulently wealthy Mr. Darcy, who made his fictional debut twelve years later.

Writing three-quarters of a century later, in 1875, Alfred Marshall, the father of modern economics, used Mrs. Billington as an example of one of the consequences of the unprecedented increase in national wealth that Britain was just beginning to experience at the turn of the nineteenth century, thanks to the industrial revolution. Growing prosperity, Marshall believed, meant richer paydays for the most skilled practitioners of every trade and profession, even as the industrial revolution drove down the incomes of ordinary artisans. He was watching the birth of the superstar economy.

Here’s how Marshall, the first truly sympathetic student of the economic impact of the industrial revolution, described what was happening: “The relative fall in the incomes to be earned by moderate ability… is accentuated by the rise in those that are obtained by men of extraordinary ability. There was never a time at which moderately good oil paintings sold more cheaply than now, and… at which first-rate paintings sold so dearly.”

One cause of this premium on super-talent, Marshall believed, was the “general growth of wealth” created by the industrial revolution. The national tide was rising, and the boats of the superstars were rising the most quickly with it. This broader economic transformation, Marshall argued, “enables some barristers to command very high fees; for a rich client whose reputation, or fortune, or both, are at stake will scarcely count any price too high to secure the services of the best man he can get: and it is this again that enables jockeys and painters and musicians of exceptional ability to get very high prices.”

Of course, the painters, musicians, jockeys, and barristers Marshall describes weren’t the first talented artists and professionals to command a premium for their talents. China’s Ming Dynasty, which ruled the Middle Kingdom from the fourteenth to the seventeenth centuries, prized painting; Qiu Ying was once paid one hundred ounces of silver to paint a long hand scroll as an eightieth-birthday gift for the mother of a wealthy patron. Artists were the superstars of Renaissance Italy, profiting from the rise of a new commercial elite much as Mrs. Billington did. Nor has culture been the only arena in which the superstars can earn huge rewards: the lords and princes of the Middle Ages bid for the services of Europe’s best mercenary knights; modernizing Russian sovereigns, such as Peter and Catherine, paid top ruble for Western technical and military expertise.

But Marshall was one of the first to point out that the industrial revolution had made superstars shine more brightly than ever, both by increasing the prices top talent could command and by pushing down the relative wages of many of the artisans and professionals lower down the ladder, through new technologies and more widely diffused skills.

As the industrial revolution gathered strength, the latter phenomenon became part of the conventional wisdom about what was happening in English society. One of Marshall’s examples—sound familiar?—was the declining wages of the clerical class: “A striking instance is that of writing… when all can write, the work of copying, which used to earn higher wages than almost any kind of manual labour, will rank among unskilled trades.” Most of us are more familiar with a more violent episode in the redundancy of once valuable skills—the machine-busting revolt of the Luddites, hand-loom weavers who protested the introduction of wide-framed, automated looms that made their trade pointless. The Luddite protests began in 1811, a decade after Mrs. Billington’s £10,000 triumph.

Marshall had the brilliance to understand that the two processes were connected—the mechanization that put the hand-loom weavers out of work was a tragedy for those individuals, but it was part of a broader economic transformation that greatly enriched the country as a whole. Among the beneficiaries of that growing national wealth were superstars like Mrs. Billington.

Already in the nineteenth century, the most successful superstars capitalized on—and, indeed, cultivated—an international market for their services: Mrs. Billington started her serious professional career in Ireland, and then made the jump back home to London. Her debut in Italy, the most prestigious music market in the world at the time, was carefully orchestrated with the help of the aristocratic English friends her London fame had won her. Mrs. Billington’s subsequent Italian success increased her cachet even further, and when she returned to London she was able to command a much higher fee.

But even though Mrs. Billington was a beneficiary of globalization, Marshall believed there was a physical limit to how much she, or any other superstar, could capitalize on the international market for her services. After all, as he observed with the asperity of someone pointing out the obvious, “The number of persons who can be reached by a human voice is strictly limited.”

Marshall’s remark about the natural constraint on the income Mrs. Billington and her successors could demand is just a footnote on page 728 of his magnum opus. But it has had a much cited afterlife in the economic literature because it is the rousing conclusion to a seminal 1981 paper by University of Chicago economist Sherwin Rosen, in which he explained how the twentieth-century technology revolution had further magnified the income of superstars. After quoting Marshall’s reference to Mrs. Billington and the impossibility of scaling her work, Rosen argued: “Even adjusted for 1981 prices, Mrs. Billington must be a pale shadow beside Pavarotti. Imagine her income had radio and phonograph records existed in 1801!”

CHARLIE CHAPLIN—IT GETS BETTER

In fact, when it came to capitalizing on the financial potential of new technology, Pavarotti came late to the game. The shift had begun nearly a century earlier, with the invention of the phonograph, the radio, and, crucially, movies. Consider the career of Charlie Chaplin. Born in 1889, roughly 125 years after Elizabeth Billington, he was, like her, a native Londoner with a prodigy’s talent for performance. Mrs. Billington had made her debut at age nine; by the time Chaplin was nine years old he was on the road, rehearsing and performing in two or three shows a day. His appeal, too, was international—he made his first U.S. tour in 1910, traveling the country for two years. But for all his energy, Chaplin, like Mrs. Billington, was constrained in his live performances by the physical reach of the human voice and the distance the human eye could see.

But Chaplin was lucky. In 1867 American inventor William Lincoln patented a device he called “the wheel of life,” through which animated pictures could be viewed. The motion picture era really took off after 1895 (six years after Chaplin’s birth), when French brothers Louis and Auguste Lumière invented the cinématographe, the first portable motion picture camera, projector, and printer. Public adoption wasn’t immediate—a disappointed Louis Lumière fretted that “the cinema is an invention without a future”—but within a generation, the way people were entertained had been transformed. In 1900, nearly all spectator entertainment was provided by live performers. By 1938, live acts accounted for just 8 percent of all public entertainment. In the mid-1920s, before the introduction of sound in movies, Americans spent $1.33 per capita on theater, versus $3.59 on movies; by 1938, the spending had further tilted in the direction of film—down to $0.45 on live performances and up to $5.11 at the movies.

Chaplin became the first global superstar of this new medium. He had an uncertain start—Mack Sennett, Chaplin’s first studio boss, deemed the actor’s film debut in the 1914 picture Making a Living “a costly mistake.” But that same year Chaplin also created the character of “the Tramp” for a series of Keystone movies. The character and the actor almost instantly became global celebrities—elsewhere in the world the Tramp took on such names as Charlot, Der Vagabund, and Carlitos. Just two years after the Tramp’s debut, Chaplin was enough of a superstar to command $670,000 to produce a dozen two-reel comedies over the next year for the Mutual Film Corporation. Adjusted for inflation, that was roughly double Mrs. Billington’s £10,000 income in 1801.

Technology had created a new way for live performers to become superstars. In Alfred Marshall’s world, superstars had emerged thanks to the increased wealth of society as a whole, particularly its richest members. That meant lawyers, doctors, jockeys, painters, and opera singers could demand higher fees of their ever wealthier clients.

But Marshall’s superstars couldn’t benefit from one of the great innovations of the industrial revolution—mass production. They were limited by the reach of the human voice. (Thanks to the printing press, writers were something of an important exception. In 1859, Anthony Trollope, a successful writer but not quite a superstar, was paid £1,000 for the novel that became Framley Parsonage, provided he could write it in six weeks.) That was why, in Marshall’s view, the bigger winners of the industrial age were businessmen. They were the ones who could take advantage of “the development of new facilities for communication, by which men, who have once attained a commanding position, are enabled to apply their constructive or speculative genius to undertakings vaster, and extending over a wider area, than ever before.”

Sherwin Rosen understood that, in the twentieth century, culture had been industrialized, too. Advances in communications technology had allowed talented individuals to take advantage of the same economies of scale: “The phenomenon of Superstars, wherein relatively small numbers of people earn enormous amounts of money and dominate the activities in which they engage, seems to be increasingly important in the modern world.” The key to the shift, Rosen argued, was “personal market scale” and the power of new technologies to increase the size of that personal market. Like nineteenth-century industrialists, the superstars of the twentieth century reached vast markets, and because technology and volume drastically reduced the cost per unit or per performance, they created new markets, too.

New technology squeezed out the old, but it also increased the overall size of the market. Live performance—Mrs. Billington’s only profession, and Charlie Chaplin’s first one—accounted for a smaller piece of the entertainment pie. But thanks to movies and the radio, people devoted more of their time to commercial entertainment, creating a bigger market for the top performers.

Writing in 1981, Rosen, the inventor of the modern theory of what he called “the economics of superstars,” knew the technology revolution was still unfolding. He ends his paper wondering what impact the coming wave of technology would have on his superstars: “What changes in the future will be wrought by cable, video cassettes and home computers?”

The Internet wasn’t featured on Rosen’s list—its commercial introduction was still more than a decade away—but once it began to make itself felt as a mass phenomenon, there were a lot of good reasons to think this new technology would be the one that would bring an end to superstar economics. This, in the term popularized by its most visible advocate, Chris Anderson, is the theory of the long tail. As Anderson argued in his 2004 essay of that name, the long tail is “an entirely new economic model for the media and entertainment industries, one that is just beginning to show its power…. If the 20th century entertainment industry was about hits, the 21st will be equally about misses.” Anderson’s point was that technology meant the end of the era of the blockbuster and the superstar; instead the new century would be the golden age of the niche artist and small audience.

It hasn’t quite worked out that way. While a great business can be built by bringing together millions of sales along the long tail—think Google—for individuals, the income gap between the superstars and everyone else is greater than ever. We see that in the overall income distribution, with the top 1 percent earning around 17 percent of the national income, and we see it within specific professions—in banking, in law, in sports, in entertainment, even in a quotidian profession like dentistry—those at the top are pulling ahead of everyone else. This superstar economics is one of the reasons we are seeing the emergence of the global super-elite.

ALFRED MARSHALL IS VINDICATED

Part of what is happening is an intensification of the rising-tide effect first noticed by Marshall more than a century ago. As the world economy grows, and as the super-elite, in particular, get richer, the superstars who work for the super-rich can charge super fees.

Consider the 2009 legal showdown between Hank Greenberg and AIG, the insurance giant he had built. It was a high-stakes battle, as AIG accused Greenberg, through a privately held company, Starr International, of misappropriating $4.3 billion worth of assets. For his defense, Greenberg hired David Boies. With his trademark slightly ratty Lands’ End suits (ordered a dozen at a time by his office online), his midwestern background, his proud affection for Middle American pastimes like craps, and his severe dyslexia (he didn’t learn how to read until he was in the third grade), Boies comes across as neither a superstar nor a member of the super-elite. But he is both.

Boies and his eponymous firm earned a reputed $100 million for the nine-month job of defending Greenberg. That was one of the richest fees earned in a single litigation. Yet, for Greenberg, it was a terrific deal. When you have $4.3 billion at risk, $100 million—only 2.3 percent of the total—just isn’t that much money. (Further sweetening the transaction was the judge’s eventual ruling that AIG, then nearly 80 percent owned by the U.S. government, was liable for up to $150 million of Greenberg’s legal fees, but he didn’t know that when he retained Boies.)

It is this logic of big numbers that is driving up the fees of Boies and a small cadre of elite lawyers. The willingness of richer clients, with more at stake, to pay higher fees is why even those superstars who aren’t directly affected by globalization or the technology revolution can nevertheless benefit from them. Boies has never lived outside the United States, speaks only English, travels overseas for an annual biking holiday in southern Europe, and has never appeared in a non-American court. He is something of a Luddite, as well—he sends fewer than a dozen e-mails a week and was only recently persuaded by his wife to adopt an iPad, which he mostly uses to check stock prices. But because globalization (Hank Greenberg is one of the pioneers of globalization, nearly as at home in Beijing as he is in Manhattan) and technology have made his clients rich, they have made Boies a superstar, too.

If you are a plutocrat, there is a sound economic rationale for paying a brilliant litigator like David Boies such a premium. But it is not just the stellar providers of business services, like lawyers, who profit from the rise of the super-elite. Purveyors of luxury, like interior designers, are becoming superstars, too. Michael Smith redecorated the Oval Office and the East Wing of the White House in 2009, but his most famous commission to date turned out to be his $1.2 million face-lift of the Manhattan waterfront office of John Thain, then the CEO of Merrill Lynch. The job made headlines in 2009, when Bank of America, which had rescued a struggling Merrill, got a $45 billion bailout from the U.S. government. Suddenly, Smith’s $800,000 fee, and some of his big-ticket purchases, including $87,000 for “area rugs” and a $35,000 antique commode—paid for by the company—became symbols of plutocratic excess. Those infamous furnishings are also an example of how the emergence of the global plutocracy is creating a class of superstar artists and professionals, the best and luckiest of whom can become plutocrats in their own right.

Another pair of winners is Candy and Candy, the London-based brothers whose opulent interior design business expanded into property development. Property is the ultimate local good, but it has also allowed Candy and Candy to surf the waves of the twin gilded ages. Candy and Candy’s star 2011 venture turned out to be a play on the rise of the global super-elite. The list of buyers at One Hyde Park, a 385,000-square-foot apartment building next to the Mandarin Oriental and overlooking London’s Hyde Park, is a better directory to the international plutocracy than the fat, bricklike facebooks distributed each year to attendees at the World Economic Forum. The biggest place, occupying some twenty-five thousand square feet, was sold for $223 million to Rinat Akhmetov, a coal and metallurgy oligarch from eastern Ukraine. Other buyers include Vladimir Kim, who made his money in Kazakh copper; Sheikh Hamad bin Jassim bin Jabr al-Thani, the prime minister of Qatar; Irish property developer Ray Grehan; and Russian real estate magnates Kirill Pisarev and Yuri Zhukov.

Candy and Candy are an example of how the twin gilded ages—the rise of the plutocrats not just in the West but also in the emerging markets—has expanded the market for the superstars who work for them and thus driven up the prices those at the very top of their professions can command. You can see the power of globalization in the divergent careers of two North American architects born just twenty years apart. Gordon Bunshaft was born in 1909 in Buffalo, New York. Frank Gehry was born in 1929, a hundred miles to the north, in Toronto. Both had Eastern European roots—Bunshaft’s parents were Russian Jews; Gehry’s people were Polish Jews. Both went on to win the Pritzker Prize, architecture’s highest honor. But you have almost certainly heard of Gehry, and you probably haven’t heard of Bunshaft. The difference is the emerging markets and their first gilded age.

Bunshaft’s signature construction is Lever House, a clean-lined modernist rectangle that presides over Park Avenue just across the street from the Four Seasons restaurant, the lunchtime canteen of Manhattan’s princes of finance. The architect designed just a few buildings outside North America, and one of those, the National Commercial Bank in Jeddah, was built at the very end of his career, in 1983, when he was seventy-four. Globalization had arrived, but too late to make much of a difference to Gordon Bunshaft. But for Gehry, who began his work just twenty years later, globalization was the making of his career. His first foreign commission, the Vitra Design Museum in Germany, was in 1989, six years after Bunshaft’s big international gig. But what was a nightcap for Bunshaft was the main course for Gehry. Since 1989, half of his work has been outside the United States, including landmark buildings like the Guggenheim Museum in Bilbao.

Gehry is more than an architect—he is a starchitect, a neologism coined to describe the small band of elite international architects whose personal brands transcend their buildings. He has appeared in Apple’s iconic black-and-white “Think Different” ad campaign, parodied himself on The Simpsons, and helped Arthur and his friends build a tree house on the children’s cartoon. He has even designed a hat for Lady Gaga. The difference between Bunshaft, an award-winning North American architect, and Gehry, a multimillionaire global starchitect, is the difference between living in the postwar era of the Great Compression, when the gap between the 1 percent and everyone else shrank, and living during the twin gilded ages, when globalization and the technology revolution are creating an international plutocracy and therefore a fantastically wealthy global clientele for superstars like Gehry.

Here’s how Eric Schmidt, then the CEO of Google, explained the impact of the global plutocracy on the prices of luxury goods, and on the fortunes of those who produce and sell them. “I’m a pilot, so I understand airplane economics very well. For a while, high-end private air jets went up 50 to 80 percent higher than they should be by any modeling, because the Russians all entered the market,” he recalled. “In these wealth markets, the numbers are small enough that you can watch the real economics. You know, there’s three bidders for one property kind of thing…. In California ten years ago, during the bubble, there was a specific street in Atherton where all the prices doubled because of a set of offers that a number of executives who no longer live in the Bay Area made. They had so much discretionary income at the time, and they needed a house, so boom, right?”

If you understand the economic cycles of the plutocrats, Schmidt explained, you can become pretty rich yourself: “There’s another IPO cycle going to happen off companies like Facebook. And those companies are predominantly headquartered in a number of cities. Those cities will have scarcity of some things those people, newly arrived, need. The first thing you need is a house, okay? So, if you want to make some short-term money, buy the assets that will be bid up by the people when they get their money six months after the IPO.

“There’s obviously negative consequences from all of this. I’m not endorsing it. I’m just trying to describe it.”

Already in the nineteenth century, Marshall noticed that the rising tide of prosperity wasn’t lifting the boats of all artists and professionals—those “moderately good oil paintings” had never been as cheap while the “first-rate” ones had never “sold so dearly.” More than a century later that winner-take-all effect has become even more pronounced in the professions whose superstars are prospering from the rise of the global super-elite.

A good example is the law. In 1950, the median salary for American lawyers working in private practice was $50,000 in today’s dollars. Lawyers working at firms with nine or more partners enjoyed a median income of around $200,000 in today’s dollars.

By today’s standards, though, that differential feels practically socialist. In 2011, the highest-paid partners at America’s top firms earned more than $10 million a year; the average salary of a partner in a law firm was $640,000. A similar chasm is opening between partners within firms. In the 1950s, the highest-paid partner at a Wall Street law firm earned double, or maybe triple, what his lowest-paid partner earned, and the main difference was seniority. In 2011, America’s most aggressively expanding law firms paid their stars ten times what the average partner earned.

That is just the gap between partners within a single elite firm. The difference between star partners and those lower down in the legal profession has become a chasm. In 2011, a year when top partner paydays exceeded $10 million and more than one hundred U.S. lawyers were on the record as charging more than $1,000 an hour (David Boies’s hourly rate is reportedly more than $1,220), the average starting salary for a law school graduate was $84,111 and the average lawyer earned $130,490. And the trend is accelerating. More and more law firms are adopting a “permanent associate” or domestic outsourcing model, in which they employ experienced lawyers at associate pay rates in non-partner-track jobs. (One firm, DLA Piper, will bill its domestic outsourced lawyers at around a hundred dollars an hour.)

Part of what is going on is the economics of the plutonomy. As the global super-elite pulls ahead of everyone else, the demand for luxury services is growing faster than the demand for low-rent ones. This, remember, was the investing thesis the Citigroup inventors of “plutonomics” devised—Gucci is doing better than Walmart; outstanding oil paintings are appreciating in value more quickly than moderately good ones; the demand for David Boies is outstripping the demand for associates.

And even as the growing demand for high-end services is putting a premium on legal superstars, some of the other forces at work in the twenty-first-century economy are pushing down the incomes of those in the middle.

Technology helps David Boies—mostly by making his clients richer. But it is driving down the incomes of more junior lawyers and reducing demand for their services, as law firms discover ways to computerize work that was once done by well-paid lawyers. The most advanced example of this trend is e-discovery. In 2010, DLA Piper faced a court-imposed deadline of searching through 570,000 documents in one week. The firm, which has expanded from its Baltimore base to become the biggest law firm in the world, hired Clearwell, a Silicon Valley e-discovery company. Clearwell’s software did the job in two days. DLA Piper lawyers spent one day going through the results. After three days of work, the firm responded to the judge’s order with 3,070 documents. A decade ago, DLA Piper would have employed thirty associates full-time for six months to do that work. (Meanwhile, DLA Piper, one of the law firms with a nine-to-one differential between its star partners and the rest, in 2011 poached Jamie Wareham, a high-profile Washington lawyer, with a reported compensation package of about $5 million for his first year.)

Globalization is having a similar, two-speed impact on lawyers. For the superstars, it is one of the forces creating richer clients, bigger cases, and fatter fees. But at the bottom, cheaper emerging market lawyers are undercutting the salaries of Western lawyers, just as outsourcing has brought down costs—and wages—in manufacturing and more routine services like call center work. One example is Pangea3, an Indian legal process outsourcing firm, which recently opened offices in the United States. Employing hundreds of lawyers who work around-the-clock shifts, Pangea3 does basic, repetitive legal work like drafting contracts and reviewing documents. Its clients have included blue-chip companies like American Express, GE, Sony, Yahoo!, and Netflix. This is “Manhattan work at Mumbai prices,” as the American Bar Association Journal put it in a recent headline.

In the age of the global super-elite, even dentists can be superstars. That’s the only way to describe Bernard Touati, the Moroccan-born French dentist who has parlayed fixing the teeth of the plutocrats, starting with the Russian oligarchs, into a superstar career of his own. Roman Abramovich, the Siberian oil oligarch, paid Touati to fly regularly to Moscow to fix his teeth—and installed a dentist’s chair in his office specially for the job. Dr. Touati treated Mikhail Khodorkovsky, Russia’s richest man before Putin sent him to Siberia, and he brightens the smiles of oligarchs’ wives, like the wife of oil and banking baron Mikhail Fridman. He treats the Western super-elite, too—New York–based designer Diane von Furstenberg is a patient, as is Madonna.

Touati’s super-rich patient list is an example of how, thanks to the Marshall effect, the plutonomy is a self-sustaining global economy, largely insulated from the rest of us. Russian oligarchs create a superstar French dentist; Wall Street bankers and Arab sheikhs, superstar interior designers. Whether your skill is tooth enamel or fabric swatches, if you make it into the superstar league you can benefit from the concentration of wealth in the hands of a small, global business elite. And whether you got your start in western Siberia or the American Midwest, once you join the super-elite you patronize the same dentist, interior designer, art curator. That’s how, from the inside, the plutonomy becomes a cozy global village.

SHERWIN ROSEN IS VINDICATED, TOO

Providing superstar services to the plutocrats is one way to join them. But an even more powerful driver of twenty-first-century superstar economics is the way that globalization and technology have allowed some superstars—the Mrs. Billingtons—to achieve global scale and earn the commensurate global fortunes. This is the superstar effect that Sherwin Rosen was most interested in, and it is both the most visible and the easiest to understand. These superstars are the direct beneficiaries of the twin gilded ages.

Thanks to the Internet, Lady Gaga reaches hundreds of millions more listeners than Mrs. Billington did. Her 2011 single “Born This Way” sold one million copies in five days. In 2011, when Lady Gaga topped the Forbes Celebrity 100 list, she had sold some twenty-three million albums and sixty-four million singles worldwide. Between May 2010 and May 2011, she performed in 137 shows in twenty-five countries, earning $170 million in gross revenue. Forbes estimated Lady Gaga’s 2010 earnings at $90 million, over eighteen hundred times the typical U.S. family income. Mrs. Billington’s fabulous £10,000 income in 1801—a fee so extravagant that Alfred Marshall used it as shorthand for superstar remuneration nearly a century later—was two hundred times the average British farm laborer’s income at the time.

There isn’t much mystery to why Lady Gaga is worth nine Mrs. Billingtons. Each one was the leading diva of her time, and each one had an international reputation. But the only way to listen to Mrs. Billington was in person; Lady Gaga can be heard and seen by anyone with an Internet connection. Technology and globalization have given Lady Gaga access to a much bigger audience, and she is consequently a much bigger star.

Superstar actors and athletes are beneficiaries of the same forces. In his own lifetime, Charlie Chaplin went from the physical stage to the silver screen, and his earnings accordingly multiplied a thousandfold. But he was underpaid compared to today’s movie stars. Contrast Chaplin’s $670,000 income in 1916–1917 with Leonardo DiCaprio’s $77 million payday in 2010–2011—adjusted for inflation, DiCaprio earned six times as much. Economies of scale have similarly enriched sports stars. Mickey Mantle, the New York Yankees’ star hitter, earned about $100,000 a season in the mid-1960s. Compare that with Alex Rodriguez, the Yankees star nearly fifty years later, who made $30 million in 2012. Adjusted for inflation, that is roughly forty times Mantle’s salary. The gap between the superstars and the rank and file has increased, too. Mantle earned less than five times the baseball average; Rodriguez earns more than ten times the major league average.

What is particularly striking about these Rosen superstars is that they have become richer even as the Internet has weakened the businesses that once supported them. Singers like Lady Gaga have never done better, yet the music business has been eviscerated by the Internet. Movie studios have also been weakened even as their stars do better than ever. Athletes can earn millions while their teams go broke.

Superstars have stayed on top partly by cashing in on their technology-driven celebrity with lucrative in-person performances. Lady Gaga earns much of her income from her live acts. The same is true of the other best-paid musical acts of 2011—U2, Bon Jovi, Elton John, and Paul McCartney. All of them earned more than $65 million, and all of them depended heavily on the revenues from live shows.

What we are seeing is the Rosen effect and the Marshall effect enhancing each other. Cheap and effective communication has allowed a few performers to achieve global celebrity more quickly and at a greater scale than ever. At twenty-five, Lady Gaga had sold sixty-four million singles around the world. But she had a slow start compared to Justin Bieber, who, at sixteen, produced a video that has now been viewed nearly 750 million times.

The paradox is that much of the technology that has made Justin Bieber and Lady Gaga famous doesn’t make them rich. In 2012, the most powerful way the two stars connected with their fans was Twitter—Lady Gaga had more than twenty-five million followers (known as “little monsters”); Bieber had more than twenty-three million “Beliebers.” Those tweets don’t make money, but they create an audience for the live acts, which do.

And those in-person performances are an example of the Marshall effect on a global scale. Just as an England that was growing rich could afford to fund lavish productions at Drury Lane and Covent Garden—and a competition between the two that drove up Mrs. Billington’s fees—so the rising middle class in emerging markets and the rising global super-elite are creating affluent audiences for today’s celebrity performers. Global scale is essential to the economics of today’s superstars: in 2010, Lady Gaga performed in twenty-nine countries, U2 in fifteen, Elton John in sixteen, and Bon Jovi in fifteen. We may think of these musicians as products of mass culture, but their shows are elite events. The average ticket price at Lady Gaga’s Born This Way show was more than a hundred dollars.

In a study of concert ticket prices, the economist Alan Krueger found that in the two decades between 1982 and 2003, a period in which first music videos (celebrated on MTV) and then digital file sharing (pioneered by Napster) extended the reach of top performers, the share of concert revenue taken by the top 5 percent of entertainers increased by more than twenty percentage points, from 62 percent to 84 percent. The top 1 percent did even better: their share more than doubled, from 26 percent in 1982 to 56 percent in 2003. (By contrast, the top 1 percent in the United States overall earned 14.6 percent of the income in 1998.)

More intimate deals with the billionaire class are a smaller, but significant, source of income for superstar performers. Arkady, a Russian businessman in his thirties, reportedly paid Lady Gaga $1 million for the privilege of appearing in her “Alejandro” music video. And even stars a little past their prime can earn fat fees for personal appearances for the plutocrats. This seems to have become the standard for the big birthdays of private equity chiefs, their equivalent of baking themselves a homemade birthday cake. In 2011, Leon Black, the founder of the private equity group Apollo, celebrated his sixtieth with a birthday bash that included a million-dollar performance by Elton John. (In a global economy, these gigs can sometimes go badly wrong, as Hilary Swank discovered when she agreed to attend Chechen strongman Ramzan Kadyrov’s thirty-fifth birthday celebrations in Grozny in exchange for a six-figure fee. She was roundly—and rightly—denounced for sharing a stage with a warlord notorious for torturing and killing his opponents.)

The people a previous generation might have called public intellectuals also make much of their living by leveraging the Rosen effects of mass popularity and the Marshall effect of earning lavish fees from a plutocracy that can afford to pay them. Malcolm Gladwell, the world’s most influential business writer, is an example. He is paid millions to write books. But he makes almost as much—and with less effort—by giving $100,000 speeches. Groups he has addressed include a gathering of Blackstone’s investors and the Pebble Beach legal conference, the Davos of the world’s top lawyers.

You don’t have to top the bestseller list to profit from the super-elite speaking circuit. Charlie Cook, the political analyst and editor of the twenty-eight-year-old Cook Political Report, “subsidizes” his journalism by giving speeches, an activity he says is “very, very lucrative,” even if it can be wearing to “haul my tired old ass to three different cities a week.”

This interplay of Marshall effects—in-person performances for a society growing more affluent—and Rosen effects—the power of technology-driven scale—is creating a superstar effect beyond what we are accustomed to thinking of as the performing arts.

Consider chefs. The rise of culinary superstars is certainly an example of Alfred Marshall’s trickle-down wealth from the super-rich. Plutocrats not only insist on the best barristers and the finest jockeys, they also want to dine at El Bulli, with its €250 prix fixe meal (or at whatever its successor turns out to be). And there, you might think, it would end. After all, preparing a delicious meal, like arguing a case in court or riding a racehorse, is a hands-on service, which can’t be duplicated and scaled in the way an aria or a drama can be.

For superstars, that is a serious financial constraint. One way celebrity chefs are getting around it is by transforming their trade from a high-end personal service to a scalable mass performance. Thus Mario Batali first shot to fame as the iconoclastic founder of Po, a trattoria in New York’s West Village. But, even at fifteen dollars a plate, physical dishes of ravioli can get you only so far. Batali became a real superstar in 1997, when he signed a contract to host his own show, Molto Mario, on the Food Network. The celebrity his TV show created not only sent diners to his restaurants, it allowed Batali to go mass retail—as the author of bestselling books, producer of his own line of pasta sauces, co-owner of a vineyard, and partner in Eataly, an Italian grocer, wine store, and cluster of restaurants kitty-corner to Manhattan’s Flatiron building. If your client is visible enough, superstar cooks don’t even need a retail presence to build a second career in the mass media. Consider Art Smith, Oprah Winfrey’s personal chef until 2007, who used that pedigree to publish three cookbooks, open three restaurants, and rent out his celebrity by writing menus for other restaurants.

You could observe how superstar chefs benefit from both the Rosen and Marshall effects at a super-elite meal hosted by the consulting firm Booz Allen Hamilton on a balmy late June evening in 2011 at the Aspen Meadows Resort, during the Aspen Ideas Festival. The evening’s guests included, among others, Alan Greenspan. “Curated” food has become part of the super-elite lifestyle, so for this “meal of a lifetime” the firm flew in Craig Stoll, co-owner and cofounder of the Delfina group of restaurants in San Francisco, to prepare supper with his kitchen staff. Each course was “narrated” by Corby Kummer, a senior editor and food writer at the Atlantic.

There was a lot to narrate. The second course, for example, of Berkshire pork arista with butter beans and grappa-preserved cherries, was served along with the commentary that the meat came from Niman Ranch, a producer so ethical its founder’s third wife was a vegetarian and author of a book called Righteous Porkchop. The sour cherries had been purchased at the San Francisco farmers’ market by Delfina staff—a particular coup because this year’s crop was poor and cherries were therefore scarce—and marinated in homemade Delfina grappa. The blackberries and raspberries served for dessert—the frutti del bosco accompanying Delfina’s buttermilk panna cotta—had traveled from San Francisco to Aspen that morning in carry-on bags packed into the overhead luggage compartments by Delfina staff. Most of the diners applauded when Kummer related that final detail.

Cooking a private meal for Booz Allen Hamilton and its guests is one way that Craig Stoll and superstar chefs like him benefit from the broader rise of the super-elite. But in an aside that revealed the extent to which simultaneously catering to a mass audience has become part of the everyday menu for celebrity cooks, Kummer concluded the meal by informing the replete audience that Stoll hadn’t written a book “yet,” so as a keepsake of their meal of a lifetime—their ricordi del soggiorno—the diners would have to make do with colorful Italian-style ceramic plates depicting a signature Delfina seafood dish. The wife of one diner, who was traveling back to Westchester the following morning on a private plane (“wheels up at nine a.m.”), was persuaded to take the dish home when she was shown that Delfina had encased it in bubble wrap and packed it tightly in an air-travel-friendly cardboard box.

Cooks are just the latest tradesmen to understand that the most powerful way to cash in on superstar talent is to strike the right combination of very expensive personal service for the elite with a cheaper, mass-produced version. Tailors probably got there first. Their first revolutionary was Charles Frederick Worth, an Englishman born in 1825 who moved to Paris in 1845. Worth was the Elizabeth Billington of the clothing industry—a superstar who cashed in on the emergence in the nineteenth century of a super-rich European elite. To do that, Worth had to invent a new profession. He started out in London and then in France as a draper. He saw an opportunity to expand the business by sewing clothes for his clients, not just selling them fabrics. Worth persuaded his initially hesitant employers to back his idea, and they opened a small dressmaking department. It became increasingly profitable, and Worth was made a partner in the firm. That success emboldened him to set up his own venture in 1858, financed by Otto Gustav Bobergh, a Swedish investor. Before long Worth had created a new superstar profession—haute couture—and become its first practitioner.

Worth sewed his label into his dresses. Rather than sewing clothes created by his clients, he invented modern fashion design by presenting his own styles four times a year, then custom producing them for his clients. Worth was an avid adopter of technology. The first reliable sewing machine was patented in Boston by Isaac Singer in 1851, seven years before Worth opened his dressmaking shop, and his seamstresses used sewing machines wherever that was quicker and more efficient than stitching by hand. Worth also enthusiastically used factory-made decorations such as ribbons and lace.

Worth made his name by assiduously courting the European aristocracy. An early client was Princess Pauline von Metternich, wife of Austria’s ambassador to France, and his success was assured when Empress Eugénie, wife of Napoléon III, began to wear his designs. But financially he was as much a beneficiary of America’s Gilded Age as were the Astors, Carnegies, and Vanderbilts, who sent their ladies to Paris to order their entire wardrobes. They would also make the transatlantic journey to buy dresses for special occasions, such as weddings or the lavish masquerades that, as with the 1897 Bradley Martin ball, were a fixture of late nineteenth-century elite social life.

Worth was more than a superstar tradesman. He was an innovator who created a new way of making and selling clothes to the rising European and American super-rich. In the 1870s, at the peak of his career, he was making $80,000 a year; some of his dresses sold for as much as $10,000. That was a fortune, to be sure. But just as Mrs. Billington’s earnings were limited by the number of people who could hear her perform in person, the six thousand to seven thousand gowns the House of Worth produced a year were each tailored to the body of a specific client.

But just as Charlie Chaplin’s superstardom dwarfed Elizabeth Billington’s since he could perform for the masses, fashion designers became exponentially richer when they expanded from the haute couture business to prêt-à-porter. That revolution happened in 1966, when Yves Saint Laurent opened his first Rive Gauche ready-to-wear store on the rue de Tournon in the sixth arrondissement of Paris, less than two miles away from the original home of Worth and Bobergh, where Charles Worth had gone into business just over a century earlier.

It took the couturiers a long time to reap the benefits of mass production. That was partly because the sewing machine didn’t immediately translate into well-made and cheap clothes for women—the most lucrative designer market. The burst of mid-nineteenth-century inventive genius devoted to the sewing machine—call it the smartphone of the 1850s—almost immediately translated into the mass production of military uniforms for the U.S. Civil War and Europe’s Franco-Prussian War.

But factory-produced women’s clothes remained a difficult value proposition. As late as 1920, a study found that it was still cheaper to sew a dress at home, for an average cost of twenty dollars, than to buy it ready-made, for an average cost of thirty dollars; buying a dress from a dressmaker, at thirty-five dollars, was the priciest of all. This was partly because the big technology advance in clothes production—the sewing machine—could be used almost as effectively at home as it could be in the sweatshops of the garment district. As long as your wife’s or daughter’s labor was cheaper than that of an immigrant in midtown Manhattan, and for very many Americans it was, buying ready-made clothes was a luxury—hence Laura Ingalls Wilder’s remembered envy in Little House on the Prairie of the wealthy classmate who could afford a “store-bought” dress. The second problem was fit, more of an issue for women’s clothes than for men’s because they were often tighter and subject to more quickly changing styles.

We still don’t have a perfect answer to the problem of fit—female readers can sigh here—but there was a tipping point in 1941, when, in a government project funded to employ casualties of the Great Depression, the U.S. Department of Agriculture measured almost fifteen thousand women and published the results, creating the first standard dress sizes. Industrial sewing technology made advances, too, and by the 1950s a factory-made dress could be produced in a fraction of the time it took for a lone seamstress using a sewing machine.

As Yves Saint Laurent realized, these two innovations made it possible for superstar designers to benefit from economies of scale. More than acting or singing or cooking, modern fashion design (as opposed to mere dressmaking) was invented as a very expensive service for the Gilded Age elite. Saint Laurent understood that his move into ready-to-wear was a break with that paradigm, and he sought to make his populism a virtue. Fashion, he liked to say, would be incredibly upset if its sole purpose were to dress rich women. (Note, however, that the first highly visible client of Rive Gauche, the YSL ready-to-wear line, was Catherine Deneuve. And in 1987, a few days after the Black Monday stock market crash, the collection included a $100,000 jeweled jacket.)

Many of Saint Laurent’s fellow elite couturiers were horrified. Emanuel Ungaro wrote that the opening of Rive Gauche saddened him greatly. Pierre Cardin, who had experimented with, and then abandoned, his own foray into ready-to-wear a year earlier, warned that by leveling and standardizing we would fabricate a world in which “we will die of boredom.”

Before long, however, it became clear that by producing both an haute couture line and a prêt-à-porter line—offering very costly personal service to the super-rich, and using technology to scale their talent—the fashion designers at the very height of their profession could benefit from both Marshall and Rosen effects. In 1975, Yves Saint Laurent earned $25 million, a hundred times what Charles Worth earned at the peak of his career (taking inflation into account). Worth was richer than his French seamstresses; Saint Laurent, however, was a veritable plutocrat compared with the foreign garment workers who sewed his prêt-à-porter line. As in the law, the performing arts, and cooking, in fashion the chasm between the superstars and everyone else is only getting bigger.

THE MARTIN EFFECT—TALENT VS. CAPITAL

The Marshall superstars and the Rosen superstars—and those who benefit from both effects—are getting richer in two ways.

The first is that the pie itself is larger: their super-rich clients are richer than ever, and economies of scale now allow them to reach a mass audience. The second is that they are getting a bigger share of the pie relative to their less elite peers (whether those peers are any less talented is open to debate). Their clients—both the super-rich and the masses—prefer to listen to the “very best” singer, and wear clothes created by the “very best” designer. Even where the service can’t be scaled—as in a courtroom appearance or an original painting—the same force is at work.

Roger Martin, a management consultant and business school dean, thinks that over the past three decades another force has come into play: superstars aren’t just earning more from their clients, they are increasingly able to extract a greater amount of the value of their work from their employers. In Martin’s view, this dynamic, which he describes as the struggle between talent and capital, is tilted in favor of the “talent,” or the superstars. Just as the fight between labor and capital defined the first stage of industrial capitalism in the nineteenth and twentieth centuries, Martin argues that the battle between capital and talent is the central tension in the knowledge-based postindustrial capitalism of the twenty-first century.

Here is how Martin laid out his theory in the Harvard Business Review: “For much of the twentieth century, labor and capital fought violently for control of the industrialized economy and, in many countries, control of the government and society as well. Now… a fresh conflict has erupted. Capital and talent are falling out, this time over the profits from the knowledge economy. While business won a resounding victory over the trade unions in the previous century, it may not be as easy for shareholders to stop the knowledge worker–led revolution in business.”

Martin’s thesis helps explain one of the most striking contrasts between today’s super-elite and their Gilded Age equivalents: the rise, today, of the “working rich.” As Emmanuel Saez found, the wealthiest Americans these days are getting most of their income from work—almost two-thirds—compared to a fraction of that, roughly one-fifth, a century ago.

Martin’s theory about the growing power of “the talent” builds on the ideas of Peter Drucker, the Austrian-born scholar who laid the intellectual foundations for the academic study of management. That means you can probably blame Drucker for far too many soul-destroying PowerPoint presentations, peppy but hollow business books, and inspirational corporate “coaches” with lots of energy but no message. But Drucker also, more than half a century ago, predicted the shift to what he dubbed a “knowledge economy” and, with it, the rise of the “knowledge worker.”

Drucker made his name in America, but he was a product of the Viennese intellectual tradition—Joseph Schumpeter was a family friend and frequent guest during his boyhood—of looking for the big, underlying social and economic forces and trying to spot the moments when they changed. Accordingly, he saw the emerging knowledge worker as both the product and beneficiary of a profound shift in how capitalism operated. “In the knowledge society the employees—that is, knowledge workers—own the tools of production,” Drucker wrote in a 1994 essay in the Atlantic. That, he argued, was a huge shift and one that would, for the first time since the industrial revolution, shift the balance of economic power toward workers—or, rather, toward one very smart, highly educated group of them—and away from capital.

As Drucker explained: “Marx’s great insight was that the factory worker does not and cannot own the tools of production, and therefore is ‘alienated.’ There was no way, Marx pointed out, for the worker to own the steam engine and to be able to take it with him when moving from one job to another. The capitalist had to own the steam engine and control it.” Hence the power of the robber barons and the complaints of the proletariat.

But that logic collapses in the knowledge economy: “Increasingly, the true investment in the knowledge society is not in machines and tools but in the knowledge of the knowledge worker…. The market researcher needs a computer. But increasingly this is the researcher’s own personal computer, and it goes along where he or she goes…. In the knowledge society the most probable assumption for organizations… is that they need knowledge workers far more than knowledge workers need them.”

Here, then, is another way that some of the highly talented are catapulted into the super-elite: when it becomes possible for them to practice their profession independently. Or, to put it another way, when the tool of their trade is a personal computer, rather than a steam engine.

Of course, even during the first machine-driven thrust of the industrial revolution, there were some superstars who remained beyond the thrall of the capitalists. A painter needed only oil and canvas; a lawyer needed only his education, wits, and admission to the bar. It is no accident that it was the superstars of these two professions that Marshall, writing in 1890, singled out as benefiting disproportionately from the Western world’s economic transformation.

In the knowledge economy, more and more professions use a laptop rather than a steam engine, and that means that the superstars in these fields are earning ever greater rewards. The intellectuals are on the road to class power.

THE STREET AND THE SUPERSTARS

The biggest winners are the bankers. They did well enough, to be sure, in the industrial revolution. They were among that era’s plutocrats—think J. P. Morgan in New York, or Siegmund Warburg in the City of London. But these were the owners of capital. Their employees, the salaried financial professionals, weren’t nearly as richly rewarded. Their job was just to keep score.

In the postwar era, with the steady rise of the knowledge economy, the bankers’ role has been dramatically transformed. Instead of working for the owners of capital—whether they are industrial magnates or the shareholders of publicly traded companies—financiers have discovered they can themselves own the capital and, with it, the companies. Critically, this shift from wage earner to owner has been accomplished not just by one or two stars at the very top of the field—the Oprah Winfreys or the Lady Gagas—but by thousands. In 2012, of the 1,226 people on the Forbes billionaires list, 77 were financiers and 143 were investors. Of the forty thousand Americans with investable assets of more than $30 million, a group described by Merrill Lynch, which produces the premier annual study of the wealthy, as “ultra high net worth individuals,” 40 percent were in finance. Of the 0.1 percent of Americans at the top of the income distribution in 2004, 18 percent were financiers. Bankers are even more dominant at the very tip of the income pyramid. In a study of the 0.01 percent, Steven Kaplan and Joshua Rauh found Wall Street significantly outearned Main Street. Collectively, the executives at publicly traded Wall Street firms earned more than the executives of nonfinancial companies. Wall Street investors, such as hedge fund managers or private equity chiefs, did even better. “In 2004,” Kaplan and Rauh write, “nine times as many Wall Street investors earned in excess of $100 million as public company CEOs. In fact, the top twenty-five hedge fund managers combined appeared to have earned more than all five hundred S&P 500 CEOs combined.”

You can trace this transformation of bankers from accountants and clerks to the dominant tribe in the plutocracy to three new forms of finance pioneered in the decade after the Second World War, and to three very different men who lived within five hundred miles of one another on the East Coast stretch running from Boston to Baltimore.

The first was Alfred Winslow Jones, a patrician New Yorker (his father ran GE in Australia), who invented the modern hedge fund in 1949 when, as a forty-eight-year-old journalist with two children and two homes, he decided he needed to make more money. The second was Georges Doriot, a French-born Harvard Business School professor who invented the modern venture capital business in 1946 as a way to encourage private investment in start-ups founded by returning GIs. The third was Victor Posner, the teenage school dropout son of a Baltimore grocer who pioneered the hostile takeover business (now usually known by the more genteel name of “private equity”) in the 1950s.

Together, this trio spearheaded the transformation of finance from an industry dominated by large institutions whose job was the conservative stewardship of other people’s money into a sector whose moguls were iconoclastic entrepreneurs who specialized in risk, leverage, and outsize returns. The broader economic impact of this revolution remains debatable—you could argue that these three men are the fathers of the instability of modern financial capitalism—but it was clearly crucial in the rise of the super-elite. Hedge funds, venture capital, and private equity transformed finance—previously the dependable plumbing of the capitalist economy—into an innovative frontier where smart and lucky individuals could earn nearly instant fortunes.

The biggest beneficiaries are those who strike out on their own. And the would-be masters of the universe know that. David Rubenstein, the billionaire cofounder of the Carlyle Group, one of the world’s biggest private equity firms, told me that when he visited America’s top business schools during their spring recruiting season in 2011, he discovered that everyone wants to be an entrepreneur. “When I graduated from college, you wanted to work for IBM or GE,” he told me. “Now when I talk to people graduating from business school, they want to start their own company. Everyone wants to be Mark Zuckerberg; no one wants to be a corporate CEO. They want to be entrepreneurs and make their own great wealth.” That quest starts earlier and earlier. Jones and Doriot were both nearly fifty when they started their businesses. Nowadays, would-be plutocrats want to be well on their way to their fortune by their thirtieth birthday.

THE BILLIONAIRE’S CIRCLE

But the real mass revolution sparked by the rise of entrepreneurial finance is in the way that it reshaped the big institutions it threatened to usurp. Civilians—which is to say anyone who doesn’t work on Wall Street (or maybe in Silicon Valley)—tend to think of the $68 million earned by Lloyd Blankfein in 2007, just before the crash, or the $100 million bonus earned by Andrew Hall, Citigroup’s star energy trader, in 2008, as princely fortunes. On the Street itself, though, even the most successful and lavishly compensated employees of the publicly listed firms see themselves as also-rans compared to the principals of hedge funds, venture capital firms, and private equity companies.

We got a glimpse of that way of thinking when federal agents were allowed to tap the telephones of Raj Rajaratnam, a billionaire hedge fund founder, and his network of contacts. In one of those conversations, Rajaratnam and Anil Kumar discuss their mutual friend Rajat Gupta, the Indian-born former head of McKinsey, a company that epitomizes the rise of the managerial aristocracy. Gupta was on the board of Goldman Sachs, one of the most prestigious in the world. But he had been invited to the board of KKR, one of the four biggest private equity firms. Serving on both would be a “perceived conflict of interest,” because KKR and Goldman often compete for the same business. That left Gupta with a tough decision, but he was leaning toward KKR. Here, according to Rajaratnam, is why: “My analysis of the situation is he’s enamored with Kravis [one of the three founders of KKR] and I think he wants to be in that circle. That’s a billionaire’s circle, right? Goldman is like the hundreds of millionaires’ circle, right? And I think here he sees the opportunity to make $100 million over the next five or ten years without doing a lot of work.”

That phrase—the billionaire’s circle—is the key to how the entrepreneurs of finance transformed the wider culture of Wall Street, and thus of the global banking business. Thanks to Jones, Doriot, and Posner, being in the “hundreds of millions” circle isn’t enough. To understand how that sentiment has ratcheted up individual compensation for Wall Street’s salarymen—not just the entrepreneurs who take the risk of going it alone—consider this fact: in 2011, 42 percent of Goldman Sachs’s revenues were spent paying its employees, who earned an average of $367,057. Nor is that princely compensation restricted to the über-bankers at Goldman Sachs. At Morgan Stanley, which made a $4 billion mistake on the eve of the financial crisis and whose recovery from it has been lackluster, compensation accounted for 51 percent of revenue in 2010. At Barclays, which now owns Lehman, the figure was 34 percent; at Credit Suisse, it was 44 percent. To put it another way, on Wall Street, in the battle between talent and capital, it is the talent that is winning. Wall Street is the mother church of capitalism. But its flagship firms are run like Yugoslav workers’ collectives.

THE MATTHEW EFFECT

Matthew of Capernaum was a Galilean tax collector and the son of a tax collector. He became one of Jesus Christ’s apostles, the patron saint of bankers—and one of the first thinkers about superstars. What he noticed was the ratchet effect of superstardom: “For unto every one that hath shall be given, and he shall have abundance; but from him that hath not shall be taken away even that which he hath.”

The Marshall effect, the Rosen effect, and the Martin effect are all about the ways in which superstars are able to be better paid for the value they create—thanks to richer clients (Marshall), more clients (Rosen), and better terms of trade with their financial backers (Martin). The multiplier effect that Saint Matthew observed is what makes all these drivers of superstardom so powerful: the superstar phenomenon feeds on itself.

We are all familiar with the Matthew effect in pop culture, where it is so apparent that it seems as inevitable and unremarkable as gravity. Celebrities are famous for being famous. And fame is its own achievement and currency. One reason we know that is because of fame production machines, like reality TV shows, and the intense popular desire to participate in them. (In Philadelphia in August 2007, twenty thousand people competed for twenty-nine spots on American Idol, a far tougher ratio than being admitted to Harvard.)

Here’s what might surprise you: The intrinsic power of superstardom—making an impact because of who you are, not what you do—operates not only in the skin-deep world of entertainment. It also applies to what we like to think of as the empirical universe of science. In fact, the term “Matthew effect” was coined by sociologist Robert Merton to describe how prestigious awards, in particular the Nobel Prize, influenced the perception of scientific work. Merton discovered that science had its own superstars, and that those stars’ discoveries were considered more important or original just because of who had made them.

Merton found that scientists who published frequently and worked at “major” universities gained more recognition than scientists who were equally productive but worked at lesser institutions. In cases where several researchers made the same discovery at roughly the same time, the more famous scientist was usually credited with the breakthrough while his or her unknown peer became “a footnote.” Writing more than four decades ago, Merton predicted that the superstar phenomenon would accelerate, partly because science was at the beginning of a shift from “little science,” with an investigator and a microscope, to “big science, with its expensive and often centralized equipment needed for research.” The superstars, he believed, would be the only ones to get the tools to do “big science,” giving them a further advantage relative to their less recognized peers.

What is striking about Merton’s scientific superstars is how conscious they are of the inequities of the celebrity from which they benefit. One Nobel Prize–winning physicist pointed out: “The world is peculiar in this matter of how it gives credit. It tends to give credit to [already] famous people.” A Nobel Prize–winning chemist admitted: “When people see my name on a paper, they are apt to remember it and not to remember the other names.” Another physics laureate went so far as to worry he was getting kudos for discoveries made by others: “I’m probably getting credit now, if I don’t watch myself, for things other people figured out. Because I’m notorious and when I say [something], people say: ‘Well, he’s the one that thought this out.’ Well, I may just be saying things that other people have thought out before.”

The scientist who best exemplifies the self-fulfilling power of fame is, ironically, the one most of us would immediately name as the twentieth century’s brightest example of pure intellectual genius: Albert Einstein. Einstein was indeed a groundbreaking physicist, whose theory of relativity ushered in the nuclear age and transformed the way we think about the material world. But why is he a household name, while Niels Bohr, who made important contributions to quantum mechanics and developed a model of atomic structure that remains valid today, or James Watson, one of the discoverers of the double helix structure of DNA, is not?

According to historian Marshall Missner, Einstein owes much of his power as one of the most influential men of the twentieth century less to his theoretical papers and more to the trip he made to the United States in April 1921 as part of a Zionist delegation led by Chaim Weizmann. Before the ship made landfall, Einstein was already known—and feared. His theory of relativity, first put forward in 1905, had been dramatically confirmed in 1919 by the observation of the deflection of light during the solar eclipse in May of that year. The discovery captured the American popular imagination, but not in a good way. The twenties were a fraught decade. The Bolsheviks were consolidating their power in the Soviet Union. Germany was struggling under the weight of punitive World War I reparations. The U.S. economy was still booming, but income inequality was higher than it had ever been and elites were frightened both of homegrown populist protesters and of revolutionary ideas crossing the Atlantic. It was also a time of intense xenophobia and mounting anti-Semitism.

In that climate, America’s arbiters of public opinion decided that Dr. Einstein and his theory of relativity were sinister and subversive. It became a truth universally acknowledged that only “twelve men” in the world understood the theory of relativity. Pundits worried that this small, foreign cabal could use its knowledge to bend space and time and to enter a “fourth dimension” and thereby achieve “world domination.” Even the New York Times warned of “the anti-democratic implications” of Einstein’s discovery: “The Declaration of Independence itself is outraged by the assertion that there is anything on earth, or in interstellar space, that can be understood by only the chosen few.”

Then came the Weizmann delegation. Zionism was growing in popularity among New York Jews, and thousands came to the pier to greet the visitors. But the press thought the crowds were Einstein groupies. The Washington Post reported there were “thousands at pier to greet Einstein.” The New York Times wrote that “thousands wait four hours to welcome theorist and his party to America.” Its interest piqued, the press pack descended on Einstein. Instead of the “haughty, aloof European looking down on boorish Americans” they had expected, he turned out to be a modest, likable guy who “smiled when his picture was taken, and produced amusing and quotable answers to their inane questions.” No longer a threat to the Declaration of Independence, “Professor Einstein,” the New York Times editorial page declared, “improves upon acquaintance.” The scribblers loved him, and they loved the frisson of overturning their readers’ expectations, and a scientific legend was born. From that moment on, a great deal of Einstein’s power in the world, particularly outside the lab, but also within it, was derived from his celebrity.

You can see the same power of accidental celebrity at work in other fields. One is bestselling fiction. Thanks to the inevitable mistakes in bestseller lists (in 2001 and 2002, 109 books that should have been on the New York Times bestseller list according to their sales were left off), Stanford Business School professor Alan Sorensen was able to show that for books of equal initial popularity, being left off the list—not getting the Nobel Prize, not enjoying Einstein’s superstar treatment on that 1921 visit to the United States—meant fewer subsequent sales.

The same is true of classical musicians. The most important contest for pianists is Belgium’s Queen Elisabeth Competition. Looking at eleven years of the competition, economists Victor Ginsburgh and Jan van Ours found that the top three players went on to become successful professional musicians. Less than half of the others were able to find work of any sort as musicians. But is that a reward for talent or for the celebrity of winning the competition? One clue that officially being named a superstar—winning the competition—had more value than pure talent was an unexpected discovery Ginsburgh and van Ours made when they studied the winners. Placing first, second, or third correlated closely with the randomly determined order in which contestants had competed. So, unless you believe that the random order of participating in the competition is linked to talent, the more obvious conclusion is that the music world celebrity brought by winning the Queen Elisabeth Competition, independent of how good you are, has a powerful effect on your professional success as a musician.

But what about the long tail? One of the promises of the Internet has been that it can weaken the Matthew effect: the Web has low barriers to entry, and we all start out equal online. Matthew Salganik and Duncan Watts tested that premise in 2005 on 12,207 Web-based participants. The research subjects were offered a menu of forty-eight songs. Some participants were shown the songs ranked by popularity in the research group and told how often each song had been downloaded. Others were shown the songs in random order. A separate group was shown the songs in a meek-shall-inherit-the-earth order—the least popular songs were presented as most popular and vice versa. The results largely confirmed Merton’s thesis: being presented as popular, whether that information was true or not, strongly increased a song’s subsequent popularity. The impact was strongest for the songs that were the “worst” as measured by the unmanipulated judgment of listeners. But the effect was not absolute. Even when presented as the least popular in the “inverted” world, the best songs gradually climbed up the rankings. If you are very, very good, you can break into the superstar league, but it’s an uphill battle.
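The feedback loop Salganik and Watts measured can be illustrated with a toy cumulative-advantage model. This is a sketch of my own, not their experiment or code: each simulated listener downloads one song, chosen with probability proportional to the song’s intrinsic appeal plus a “social” term proportional to how often it has already been downloaded. The `social_weight` parameter and all the numbers here are illustrative assumptions.

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

def simulate(num_songs=48, listeners=5000, social_weight=0.0):
    """Cumulative-advantage toy model.

    Each listener picks one song with probability proportional to
    (intrinsic quality) + social_weight * (downloads so far).
    With social_weight = 0 choices are independent; raising it lets
    early, partly random success snowball (the Matthew effect).
    """
    quality = [random.random() for _ in range(num_songs)]
    downloads = [0] * num_songs
    for _ in range(listeners):
        weights = [q + social_weight * d for q, d in zip(quality, downloads)]
        idx = random.choices(range(num_songs), weights=weights, k=1)[0]
        downloads[idx] += 1
    return quality, downloads

def top_share(downloads, k=5):
    """Fraction of all downloads captured by the top k songs."""
    return sum(sorted(downloads, reverse=True)[:k]) / sum(downloads)

# Independent listeners: market share roughly tracks quality.
q0, d0 = simulate(social_weight=0.0)
# Visible popularity: early luck is amplified and shares concentrate.
q1, d1 = simulate(social_weight=0.05)

print(f"top-5 share, independent listeners: {top_share(d0):.2f}")
print(f"top-5 share, with social influence: {top_share(d1):.2f}")
```

Under these assumptions the run with social influence concentrates far more of the market in a few winners, while the no-influence run spreads downloads in rough proportion to quality—a minimal version of the dynamic in which being seen as popular is itself a source of popularity.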

CAPITAL FIGHTS BACK

On January 11, 1991, Jeffrey Katzenberg, then CEO of Walt Disney Studios, sent a memo to his thirteen top executives titled “The World Is Changing: Some Thoughts on Our Business.” Despite its bland title, the twenty-eight-page note was instantly leaked to the press, probably by Katzenberg himself, and it swiftly became the most read prose in Hollywood. “We are entering a period of great danger and ever greater uncertainty,” the memorandum began. The change Katzenberg was worried about? The rise of superstars.

In 1984, when Katzenberg and his team arrived at Disney with a mandate to turn around the venerable but troubled moviemaker, Disney had been “the most cost-conscious of all studios.” It had saved money mostly “by avoiding the reigning stars of the moment.” Katzenberg wrote, proudly: “Instead we featured stars on the downward slope of their career or invented new ones of our own. Robin Williams suggested to Newsweek magazine that we recruited talent by standing outside the back door of the Betty Ford Clinic. The first instance of this approach to moviemaking was Down and Out in Beverly Hills, a film that reignited the careers of its three stars, Bette Midler, Richard Dreyfuss, and Nick Nolte.”

But as the decade progressed, Disney found itself paying its stars more. What particularly distressed Katzenberg was the Matthew effect—paying stars not just for their talent, but also for their fame, something Katzenberg called the “celebrity surcharge”: “In 1984, we paid Bette only for her considerable talent. Now, we must also pay her for her considerable and well-earned celebrity. This is what might be called the ‘celebrity surcharge’ that must be ante’d up when hiring major stars.”

Katzenberg’s biggest complaint was the signal achievement of “talent” in the second half of the twentieth century: the shift from earning a wage to having a stake in the business. Hedge fund managers and private equity investors call their stake “the carry.” Movie stars call it “participation.” Katzenberg called it “extremely threatening”: “Unreasonable salaries coupled with giant participations comprise a win/win situation for the talent and a lose/lose situation for us. It results in us getting punished in failure and having no upside in success.”

Actors weren’t the only talent Katzenberg worried about. Writers, he complained, were starting to be paid “$2–$3 million for screenplays.” Instead, Katzenberg thought Disney should be paying “young” writers $50,000 to $70,000 or “proven writers” $250,000 to develop a screenplay for an idea suggested by Disney. Katzenberg admitted that in the new world of superstar scripts, persuading writers to agree to these skimpier rations, ideally on long-term contracts, wouldn’t be easy: “I know many will argue that this just isn’t feasible anymore. Agents won’t let their clients sign long-term contracts because the spec script market is too lucrative. All this means is it will be tougher. It doesn’t mean it’s impossible.”

Katzenberg’s solution was for Disney executives to seek out actors and writers who were talented but either hadn’t achieved or had lost the superstardom that allowed those at the very top to charge a celebrity surcharge. “All the big-time writers have one thing in common,” Katzenberg wrote. “They were all once unknown and thrilled just to make a sale. The future big-time writers are out there and would be grateful just to be considered by our studio. To find them, we have to search harder, dig deeper… and be there first.”

As for actors, Katzenberg urged his team to “be aggressive… at the comedy clubs searching for future stars, and at the back door of the Clinic picking up the stars that once were and can be again.”

Katzenberg is not alone. As superstars have become more powerful, bosses in every field have struggled to find ways to avoid paying them the celebrity surcharge. In addition to haunting the back door of the Clinic, studio chiefs have shifted resources to animated films—illustrators, technologists, and voice actors don’t yet command a superstar premium—and to serials in which the character is the star, and the actor who plays him or her in one installment can be replaced by a cheaper successor if the original becomes too famous. Reality television and competition shows are another way to avoid paying the celebrity premium, by making the hoi polloi the stars and, as Pop Idol does, binding them to contracts that prevent them from demanding any of the upside if their shows make them famous.

Some sports team owners are on a similar quest to pay for talent, not stardom. That is the story of the Oakland A’s and their general manager, Billy Beane, as lionized in Michael Lewis’s Moneyball. Beane is Lewis’s underfunded, underdog hero, but his is really the story of capital—the baseball team owners—looking for a way to avoid paying the celebrity premium to its stars—the players—in this case by seeking out athletes whose skills were crucial to the team’s success but were undervalued by the market.

Even in finance, whose superstars are less well known but even better paid than film and sports celebrities, some bosses have been looking for ways to avoid the celebrity premium. Harvard Business School professor Boris Groysberg became the hero of Wall Street’s HR departments in 2010 when he published Chasing Stars, a study that has become the banking industry’s Moneyball. After interviewing more than two hundred Wall Street analysts, Groysberg concluded that recruiting stars from rival firms was a waste of money, because poached analysts tended to falter when they were plucked from their native culture. Warren Buffett famously agrees. He emerged from his Omaha fastness to join the battle between capital and talent on Wall Street in the 1990s, when he briefly chaired struggling investment bank Salomon Brothers—a period he described in the next year’s letter to shareholders as “far from fun”—and slashed the bonus pool by $110 million.

But here is the catch in management’s fight to rein in superstar salaries, and one institutional reason the super-elite continue to rise: in the age of the vast, publicly traded joint-stock company, where ownership is widely dispersed and boards lack the time, expertise, and gumption to weigh in on the specifics of how companies operate, the managers themselves are superstars, too. Entertainers and athletes are the most visible superstars, but they are hugely outnumbered by the army of business managers who in the past four decades have been transformed from salarymen to multimillionaires.

The ideas Katzenberg laid out in his 1991 memo have been largely vindicated by subsequent academic research. Most strikingly, in a 1999 study analyzing the economics of two hundred movies, Abraham Ravid found that stars had no impact on box office revenue. Katzenberg had a powerful incentive to sniff out the financial danger of paying the celebrity surcharge—as Disney’s CEO, his job was to turn a profit. But the checks on soaring salaries of chief executives and their top teams are much weaker. Even superstars have bosses, but as Jack Welch, the first CEO to become a celebrity, said in a conversation at the 92nd Street Y in the spring of 2011, what the chief executive needs is “a generous compensation committee.”

Or a smart lawyer. Katzenberg’s big complaint about “the talent” was “participations,” or contracts that gave actors a share in a movie’s revenue. It turned out he had cut a similar deal himself, earning a share of the entire studio’s profits in addition to his cash salary and CEO perks. That package was big enough to make a dent not just in one movie’s profits but in the entire company’s bottom line, as Disney shareholders learned when the company settled a legal battle with Katzenberg over his severance package. The terms of the deal were undisclosed, but Hollywood lawyers estimated it was at least $200 million—more than four times the production costs of Dick Tracy, the overbudget movie that inspired Katzenberg’s 1991 cri de guerre.

Sometimes the title says it all. That was certainly the case in March 1986, when the Harvard Business Review published an essay headlined “Top Executives Are Worth Every Nickel They Get.” HBR is owned by Harvard University, and its readers are the aforementioned top executives and their ambitious underlings. So one purpose of the essay was inevitably service journalism’s accustomed function of flattering its constituency. But the piece had a less cynical motivation, too. Its author, Kevin J. Murphy, was in the vanguard of a small group of business school academics who had spent the previous decade trying to solve one of the big problems of twentieth-century market economies: How do you have capitalism without capitalists? Or, to put it another way, who manages the managers?

This is not a new problem. In The Wealth of Nations, Adam Smith compared the executives of a joint-stock company to “the stewards of a rich man” and warned that “being the managers rather of other people’s money than their own, it cannot well be expected, that they should watch over it with the same anxious vigilance with which the partners in a private copartnery frequently watch over their own…. Negligence and profusion, therefore, must always prevail.” Writing just over a hundred years later, Alfred Marshall bemoaned the feebleness of the staid British joint-stock company, compared to an America dominated by owner-entrepreneurs: “The area of America is so large and its condition so changeful, that the slow and steady-going management of a great joint-stock company on the English plan is at a disadvantage in competition with the vigorous and original scheming, the rapid and resolute force of a small group of wealthy capitalists, who are willing and able to apply their own resources in great undertakings.”

That small group of wealthy capitalists laid the foundations for America’s astonishing economic ascent in the twentieth century. But as the American economy matured, control of its private businesses began to pass from the hands of the vigorous, scheming, and resolute founders of Marshall’s age to a new generation of stewards. That shift was documented in a seminal paper published in 1931 by Gardiner Means, a New England farm boy and steely-nerved World War I pilot who’d eventually made his way to economics and the Ivy League faculty. Means showed that of the two hundred largest U.S. companies at the end of 1929, 44 percent were controlled by managers rather than by their owners. An even greater share of the wealth of America’s top companies was in the hands of the managerial class—58 percent of the top two hundred companies, as measured by market capitalization, was manager ruled.

Means saw this ascendant managerial class as self-selecting and self-perpetuating: the only institutional parallel he could come up with was the clergy of the Catholic Church. In a book he and Adolf Berle, a professor of corporate law at Columbia, cowrote the next year, they described the rising managerial elite as “the princes of industry.” Berle and Means saw the shift from owners to managers as comparable in its significance to the switch from independent worker-artisans to wage-earning factory employees.

Berle and Means worried about how to keep this managerial aristocracy in check. Thanks to the ability of the publicly traded company to attract capital from millions of retail investors, this managerial class presided over firms of unprecedented scale and power. But the market incentives that governed the actions of owners didn’t necessarily apply to their stewards. In fact, their interests were “different from and often radically opposed to” those of the owners—the hired managers “can serve their own pockets better by profiting at the expense of the company than by making profits for it.”

Berle and Means were leading architects of the New Deal—Berle was an original member of FDR’s “Brain Trust,” and Means, working as an economist in the Roosevelt administration, waged a campaign against price fixing in the steel industry. Their prescription, accordingly, involved state and social intervention. Government should regulate managerial princes who overstepped the mark, and a new set of social conventions should be developed requiring managers to be “economic statesmen” who ran their companies in the collective interest.

Murphy’s “Worth Every Nickel” essay was a robust public statement of a radically different solution to the problem Berle and Means had identified. Like the New Dealers, Murphy and his confreres believed that managing the managers was the central problem of twentieth-century capitalism. But instead of trying to get corporate executives to behave more like public-spirited civil servants, Murphy and his fellow business school professors thought the answer lay in the opposite direction: the stewards needed to be turned into the red-blooded founder-owners they had replaced. To do that, their financial incentives needed to be aligned as closely as possible with the success or failure of the companies they ran. That wouldn’t give them as powerful a profit motive as owning the whole company, to be sure, but it would be a close second-best.

The “Worth Every Nickel” movement was in part a response to the success of the New Dealers’ efforts to create a social and political order in which managers were constrained. Thirty years after Berle and Means worried that managers would be tempted to profit at the expense of the companies they ran, here is how John Kenneth Galbraith, hardly an apologist for the C-suite, described the ethos of corporate America: “Management does not go out ruthlessly to reward itself—a sound management is expected to exercise restraint…. With the power of decision goes opportunity for making money…. Were everyone to seek to do so… the corporation would be a chaos of competitive avarice. But these are not the sorts of things that a good company man does; a generally effective code bans such behavior.”

In a follow-up to his Harvard Business Review cri de coeur, Murphy, along with his coauthor Michael Jensen, found that the culture of restraint in the postwar era could be quantified. During the three decades after the Second World War, the U.S. economy grew at a faster, more consistent pace than ever before, and American companies were ascendant around the world. The acknowledged social and economic leaders of this golden age were the country’s captains of industry, yet during that period their salaries actually fell. In our honey-tinged, Mad Men memories of the postwar era, we may imagine it to be a time of the triumph of the company man. But in fact it was an era when the managerial aristocracy was trammeled by the rest of society, even as the companies they oversaw prospered. As Jensen and Murphy concluded in their 1990 paper: “The average salary plus bonus for top-quartile CEOs (in 1986 dollars) fell from $813,000 in 1934–38 to $645,000 in 1974–86, while the average market value of the sample firms doubled.”

Jensen and Murphy agreed with Galbraith’s explanation of what was going on—social pressure was limiting CEO salaries: “Political forces operating both in the public sector and inside organizations limit large payoffs for exceptional performance.”

The Means and Berle solution to the rise of the managerial class had prevailed, and it seemed to be working. America’s companies were no longer run by their vigorous and self-interested robber baron founder-owners, but the new salaried stewards who had replaced them weren’t looting the corporate kitty. Governed by a “generally effective code,” their incomes were actually falling. They seemed to be doing a pretty good job, too. The companies under their stewardship doubled in size between 1932 and 1976, the total real compound annual return on the S&P 500 was 7.6 percent, and America’s GDP had quintupled.

But by the late seventies and the eighties, when Jensen, Murphy, and their like-minded peers began to investigate CEO pay, the economic picture was starting to darken. Economic growth seemed to stall even as inflation rose—remember stagflation. Corporate America, too, seemed sluggish, risk averse, and under threat from more innovative foreign rivals. These were the conditions that inspired the liberal economic revolution more generally, and also a rethinking of what was happening in the corner office.

As Berle and Means had warned in the 1930s, the problem started with the twentieth-century fact that the economy was largely run by “stewards” rather than owners. But the New Dealers’ fear that these managerial aristocrats would line their own pockets hadn’t come true—indeed, the opposite was the case. And that, Jensen and Murphy warned, was the problem. The social constraints that prevented executive looting also meant executives had weak economic incentives to do an outstanding job. The New Dealers had transformed hired-gun CEOs into capitalist civil servants—public-spirited and self-restrained. The “Worth Every Nickel” business school professors wanted to turn the managerial class into red-blooded capitalist owners.

Their solution was “pay for performance.” Managers’ compensation should be more tightly tied to how well they did their jobs and, in particular, to how well their companies performed.

By one measure, the academic advocates of pay for performance were remarkably effective. After falling steadily during the postwar years, CEO salaries began to soar. The real takeoff was during the 1990s: by the end of that decade they were growing by 10 percent a year. As Roger Martin has calculated, for CEOs of S&P 500 firms, the median level of pay soared from $2.3 million in 1992 to $7.2 million in 2001. That’s a lot of money, and a growing share of the overall income of corporate America. Between 1993 and 2003 the top five executives of America’s public companies earned $350 billion. Between 2001 and 2003, public companies paid more than 10 percent of their net income to their top five executives, up from less than 5 percent eight years earlier.

These were, of course, the decades when the 1 percent broke away from the rest of the pack in society as a whole. That happened inside corporations, too. Rising CEO compensation pulled the boss ever further from the factory floor or the cubicle rows. In the early 1970s, CEOs earned less than thirty times what the average worker made; by 2005, the median chief executive made 110 times what the average worker did. And just as income inequality in society overall has become more pronounced at the very, very top, the gap has grown between CEOs and their direct reports. Until the early 1980s, the chief executive earned about 40 percent more than the next two most highly paid managers; by the early twenty-first century, he made more than two and a half times as much.

This gap is no accident—it is inevitable in an economic model in which the CEO has gone from being the company man of Galbraith’s postwar account to the free-agent superstar of the pay-for-performance era. That shift was made starkly apparent when two economists at the London School of Economics asked a simple question: “Does it matter whether you work for a successful company?” The answer from HR is—of course! And our corporate Web sites duly urge us to be team players and to root for our firm’s overall success. But when Brian Bell and John Van Reenen looked at what actually happens in a sample of companies covering just under 90 percent of the market capitalization of Britain’s publicly listed firms, they came up with a chilling reply. CEOs and executives at the very top are rewarded for corporate success, but almost no one else is: “A 10 percent increase in firm value is associated with an increase of 3 percent in CEO pay, but only 0.2 percent in average workers’ pay.”

These growing chasms within companies didn’t just mirror the broader rise of the 1 percent, they also drove it. Executives working outside finance (a category all its own) were 31 percent of the 1 percent in 2005, the single largest group. They account for an even larger share of the 0.1 percent—42 percent in 2005.

A couple of decades earlier, György Konrád and Ivan Szelényi had revealed the politically uncomfortable truth that in the so-called workers’ states the real winners—and the real bosses—were the intellectuals, particularly their technocratic branch. They are coming out on top in market economies, too. It is the MBAs on the road to class power.

Under communism, the rise of the intelligentsia was undeniably a political process. But the academic theory underpinning the rise of the MBA class in the West is all about market forces. The goal of the pay-for-performance revolution, after all, wasn’t to raise CEO compensation, although that was certainly one of its consequences. The point was to get the managerial aristocrats to do a better job by more closely tying their paychecks to their impact. On this reading, the soaring salaries of CEOs, and the growing gap between them and their senior lieutenants, are one chapter in the broader story of superstar economics. Once companies began to do a better job of tying pay to performance, they discovered that some managers were more talented than others, and those stars, like the best singers or lawyers or chefs, could command a significant financial premium.

For the CEO to be a superstar—and to be paid like one—he has to stop being a company man. The executives of the postwar era were corporate lifers. They were the creations and the servants of their companies, and a great deal of their value came from their knowledge of the particular corporate cultures that had created them and the specific business they did. The superstar CEO cannot be tied to a single corporation and, ideally, not even to a single industry. He must be an exemplary talent whose skill is in “management” or in “leadership.” He is more likely to have an MBA—28.7 percent of CEOs did in the 1990s, compared to 13.8 percent in the 1970s—and less likely to be a loyalist of a specific firm. If these are the general skills we believe it takes to lead successful businesses, the world’s companies will engage in a bidding war to secure the services of the men and women who are the world’s best managers and leaders.

That is exactly what has happened. The surge in CEO salaries coincided with a rise in bosses hired from outside the firm. In the seventies and eighties, when CEOs were paid less than they had been in the 1930s, 85.1 percent and then 82.8 percent of chief executives were company men. But in the 1990s, as CEO compensation came to vault upward by 10 percent a year, more than a quarter of chief executives were appointed from outside their firm. Jumping to a new company was a good way to get a raise—external hires, according to research by Kevin Murphy, one of the leaders of the pay-for-performance school, made 21.6 percent more than chiefs appointed from inside. In sectors where these portable general managers are most valued, all CEOs earn more—a premium of 13 percent.

One of the drivers of superstar incomes in other professions is economies of scale—singers who can perform for millions, designers whose styles can be sold around the world. Size can be a reason to pay CEOs more, too. As companies get bigger thanks to the globalization and technology revolutions, the economic impact of good management increases. The world’s very best CEO may be only marginally better than the hundredth best. But if your company’s annual revenues are, say, $10 billion, then just a 1 percent difference in performance is worth $100 million. Sure enough, as economists Xavier Gabaix and Augustin Landier found in a 2008 paper, “The six-fold increase of U.S. CEO pay between 1980 and 2003 can be fully attributed to the six-fold increase in market capitalization of large companies during that period.”

But there is one very big problem with the superstar CEO model, and it goes back to the challenge posed by the rise of the managerial aristocracy that first Berle and Means and later the pay-for-performance school grappled with. It is what economists call the agency problem, and it means that CEOs are a very special sort of superstar: the one who is in charge of the company that pays his salary. Superstar athletes are paid by the owners of sports teams, superstar chefs by their diners, and even superstar hedge fund managers are paid by their investors. But CEOs are paid by the companies they run. Their compensation, to be sure, is determined by the board of directors, but, particularly in the United States, the chairman of the board is often the CEO.

“In the U.S., you can more or less do whatever you want, without having the support of the owners,” Mats Andersson, the chief executive officer of the Fourth Swedish National Pension Fund and critic of corporate governance in the United States, told me after speaking at a conference on the issue convened in Washington by the Securities and Exchange Commission. “Because of the composition of the boards in Sweden, the company’s big decisions all have to be based on the mandate or the support of the owners.

“Who is actually responsible for executive remuneration in U.S. companies?” Mr. Andersson said. “If I could decide on my own salary, I would certainly love that system.”

Adam Smith forthrightly warned that the consequence of the agency problem was “negligence and profusion.” Academic economists today use a more delicate term: “skimming.” A decade ago, two young economists, Marianne Bertrand and Sendhil Mullainathan, came up with an original way to investigate whether CEOs were superstars, being rewarded by their firms for exceptional performance, or whether they were stewards who were rigging the rules of the game in their own favor. The test was to see whether performance-based CEO pay responded as strongly to external good fortune as it did to managerial prowess. Two of the examples of outside luck were changes in the price of oil and changes in the exchange rate. Bertrand and Mullainathan found that luck matters: “CEO pay is as sensitive to a lucky dollar as to a general dollar”—which is to say, a dollar earned through overall company performance. They found, for instance, that a 1 percent increase in the revenues of oil companies because of an increase in the price of oil led to a 2.15 percent increase in CEO pay. Better still, from the perspective of the oil chief, while an increase in the price of oil always correlated with an increase in the CEO’s paycheck, when the price fell, the CEO’s salary didn’t necessarily decline: “While CEOs are always rewarded for good luck, they may not always be punished for bad luck.”

Thanks to the financial crisis and the global recession it triggered, public opinion and politics in much of the world are catching up to this ivory tower critique. Consider Britain, where a Conservative-dominated coalition government began 2012 with a proposal to rein in executive pay. “We cannot continue to see chief executives’ pay rising at 13 percent a year while the performance of companies on the stock exchange languishes well behind,” Vince Cable, the business secretary, told parliament as he announced the new measures. “And we can’t accept top pay rising at five times the rate of average workers’ pay, as it did last year.”

Cable’s reference to the gap between CEO salaries and those of average workers is telling. We may frame our complaints about rising executive compensation with arguments about skimming—that the millions are unearned. But part of our unease stems from something entirely different—that the final outcome, the gap between CEOs and the rank and file, is wrong.

This second concern may very well not be solved by doing a better job of coping with the agency problem. Bertrand and Mullainathan’s finding that there is a lot of skimming going on in the corner office doesn’t, it turns out, make them complete skeptics of the pay-for-performance revolution. Pay for performance actually works, but only in companies where the board is strong enough to truly oversee the chief executive. Boards are best able to do that, Bertrand and Mullainathan discovered, when they have a large shareholder. “An additional large shareholder on the board reduces pay for luck by between 23 and 33 percent”—a big number, especially when you consider how tricky it is in real life and in real time to distinguish between lucky profits and hard-earned ones.

There’s a reason for CEOs to position themselves as superstars—highly talented people being paid for their skill—that goes beyond getting a great deal from the comp committee. Even in an age of tension between the 99 percent and the 1 percent, we love our superstars. That’s because, as the New York Times’s David Carr put it in a deft analysis of the popularity of basketball player Jeremy Lin, in aspirational America, we all like to think that we are superstars-in-waiting, on the verge of our big break: “The Lin story has broken out into the general culture because it is aspirational in the extreme, fulfilling notions that have nothing to do with basketball or race. Most of us are not superstars, but we believe we could be if only given the opportunity. We are, as a matter of practicality, a nation of supporting players, but who among us has not secretly thought we could be at the top of our business, company or team if the skies parted and we had our shot?” That’s the irony of superstar economics in a democratic age. We all think we can be superstars, but in a winner-take-all economy, there isn’t room for most of us at the top.