
Wednesday, March 26, 2014

New Rules for a New Millennium: Russian Invaders Beware

At a joint EU-US news conference on March 26, 2014, Presidents Barroso, Van Rompuy, and Obama discussed Russia's problematic invasion of Crimea, a province of Ukraine. The “chairman” of the European Council and the “chief executive” of the European Commission both responded to concerns that the European Union had not stood up to its business interests in order to enact economic sanctions capable of putting Putin back in his pen. Even though the two EU presidents sought to "puff up" the force latent in the sanctions already in place, Barroso insightfully made the more significant point that invasions just aren't done in the new century. In this essay, my task is to sideline the debate over whether the EU stood up to its domestic commercial interests in favor of Barroso's much more significant point concerning a paradigm shift whose time of recognition has come in the new millennium.

Did President Obama miss a chance to put Putin's exploits into historical perspective? President Barroso may have come out of the news conference as the visionary leader. 
(Image Source: Reuters)

The turn from one century to another is admittedly artificial as far as empirical (i.e., observed) change “on the ground” is concerned. Even so, a temporal benchmark, especially one involving a new millennium, can spark a “taking stock” of gradual shifts in values and practices that would otherwise go unnoticed in their accumulated significance. Put another way, while turning 50 does not instantaneously trigger the symptoms of aging, the milestone birthday can prompt a person to recognize that he or she is no longer young.

To the people who came of age in the twentieth century, the twenty-first is virtually synonymous with innovation. This really is saying something, given how much technological innovation took place during the twentieth century—daily life in the previous centuries having differed only modestly from one century to the next. Of course, innovation in one sphere, such as electronics and transportation, does not necessarily spark development in other spheres, such as political organization. From the vantage point of the early twenty-first century, therefore, I submit that a certain amount of impatience regarding “old habits that die hard” is quite understandable. With daily life so novel and open-ended thanks to computer technology, it is only natural to perceive Putin’s “Hitleresque” military adventurism in Ukraine as being as antiquated as stale smoke in a “smokeless city.” Being unaccustomed to people smoking in hotels, bars, and restaurants, a person might be shocked at a smoker’s stubborn refusal to be courteous and say, “That just isn’t done anymore.” To the smoker in a decadent establishment (and city), the non-smoker’s objection is easily dismissed.

Similarly, Putin undoubtedly found it quite easy to dismiss the public statements of political leaders around the world critical of (and likely shocked by) his brazen land-grab. Invading other countries just isn’t done anymore, yet there was Putin willfully violating the post-World War II norm anyway. Ironically, he could go right along with his “bad habit” with virtual impunity on account of the existing system of international governance, whose growth had been stunted by the antiquated notion of “absolute national sovereignty” that Jean Bodin and Thomas Hobbes had espoused centuries earlier. Putin’s selective recognition of national sovereignty (i.e., Russia’s but not Ukraine’s) is but one sign that the existing global-governance system is, like the US Articles of Confederation in the eighteenth century, structurally unsound. Not coincidentally, so too is the structural and procedural over-emphasis on the state governments in the EU (hence the mitigated sanctions against Russia).

Beyond the impotence of the United Nations and the imbalances in both the American and European empire-scale federal systems lies the true significance of the world’s novel perception of Putin’s power-grab beyond Russia’s borders. President Barroso made the point in the news conference. The real problem, he said, is this: “in the twenty-first century it’s just not acceptable that one big power takes part of another sovereign country recognized by the United Nations.” The world had significantly changed—the people had changed—such that Hitler simply would not be tolerated in the modern, or post-modern, admittedly “high-tech” novel context. The matter of economic costs that the EU and US were seeking to impose on Russia pales in comparison, according to the EU Commission’s head. Obama only vaguely touched on the central motif, yet crucially without differentiating the current century from the last, very bloody, one. “It’s about the kind of world in which we live,” he said, including respect for national sovereignty and international law, both of which Russia violated in invading Ukraine.

Whereas pallid calls to respect other countries’ respective national sovereignties and international law could easily be heard as antiquated in themselves by 2014, the notion that the old bad military habits were no longer acceptable in the new millennium was new; accordingly, the “game change” required an infusion of energy to counter the spinning inertia of the old habits that die hard, if at all. By analogy, an old smoker mired in his old ways as a sort of ongoing entitlement might need to be escorted out of a bar or restaurant in a newly “smokeless city” before the message sinks in. Generally speaking, a person used to doing something as it’s always been done will need to feel the considerable force of the “new rules” for them to have any effect. In terms of international relations, the problem facing the civilized world is how to apply the force without resorting to the (also antiquated) knee-jerk military response.

I suspect that coming up with the requisite energy to apply as an obstacle capable of effecting the “game change” of the new century “on the ground” must involve standing up to domestic commercial interests that naturally oppose economic sanctions that hurt the bottom line. By implication, development of international governance mechanisms not based on the historical notion of absolute sovereignty is vital as well. At least with regard to the EU, the commercial interests and their political patrons seemed unwilling to make the self-sacrifice necessary to amass the sort of “new energy” necessary to expunge the old-time bully from his exploits as if to say, you really cannot do that anymore.

In short, the muted sanctions against Russia really point to the lapsed state of new alternative “energy sources” and corresponding international governance mechanisms capable of transmitting a voltage capable of putting up a “fire wall” without a myriad of obstacles poking holes in it. By not taking sufficient stock of the energy required to stop the well-grooved momentum of yesteryear from coming around again, governmental officials around the world are holding the new millennium back from breaking through—and with it, a sorely needed burst of fresh air.


Tuesday, March 18, 2014

Social Media Marketing: The Social Element as an End in Itself

In the religious domain, some people struggle with the inherent incongruity of acting selflessly while believing that the righteous are rewarded in heaven. Resolving this paradox in practical as well as theoretical terms may come down to “one hand not knowing what the other hand is doing.” Whether innate or a learned skill, disentangling a practice from any hope of reward can be applied to social media marketing. This application is easier said than done, especially in a culture of greed in which opportunism is a strong norm and custom. Indeed, the underlying question may be whether a “strong personality,” once well-ingrained, is able, not to mention willing, to “park itself out back,” if even for a much-needed break.

Gary Vaynerchuk, the author of several books on social media marketing, preaches a two-step approach, which can be characterized as the marketer first becoming a native of whichever social-media platform he or she is in and then consummating the (ultimately) desired transaction. Ideally, the selling fuses with becoming a native (or being recognized as one), so the two phases are “phased” into one.

Crucially, being able to come across as a native is not the same as “going native.” Whether in business or government, putting up a front in order to be perceived by the masses as one of them is not the same as being one of them. Even though Vaynerchuk emphasizes the need to respect the nuances of a given social media platform (e.g., values and mannerisms), he may be interpreted by some readers as maintaining that presenting the appearance of respect is sufficient to “become” a native, at least for marketing purposes. In other words, a marketer need not “go native”; going through the motions is sufficient as long as the other participants believe that the entrant is satisfying their social or informational objectives.  

Unfortunately, learning how to come across as a native in sync with a platform’s distinctive “cultural” mores and norms may be too short-sighted not only in terms of “going native,” but also in achieving marketing objectives. In fact, the approach itself may be too self-serving—too close to those objectives—to render the marketer as a native. Positing a distance between engaging the social element and being oriented to making the sale, Vaynerchuk advises that in contributing to the social or informational dialogue at the outset, “you’re not selling anything. You’re not asking your consumer for a commitment. You’re just sharing a moment together.”[1] The experience shared is essentially an end in itself, eclipsing any further motive, as in to sell a product or service.

It may seem rather strange to find a marketer oriented to exploiting any opportunity “just sharing a moment together” with electronic strangers as an end in itself; serial opportunists in an enabling cultural context are used to treating other rational beings as mere means at any opportunity, rather than as ends in themselves (i.e., violating Kant’s Kingdom of Ends version of his Categorical Imperative). Vaynerchuk may undercut his own depiction of the shared experience as sufficient unto itself by reminding his readers that the “emotional connection” they “build through [participating in social or informational dialogue without selling but to “become” a native] pays off [when they] decide to throw the right hook [i.e., make the sales pitch and consummate a transaction].”[2] With such a payoff in the offing, I doubt that virtually any marketer oriented to “maximizing” any opportunity to tout, brag, or hard-sell would just share an emotional connection at a moment without being motivated by, or at least mindful of, the hidden agenda.

A "stakeholder model" approach to social-media marketing. This framework is inherently self-centric, whereas a web-like structure would be more in line with "shared experiences." Both frameworks are distinct, and yet can be managed, or related, such that neither encroaches on the other unduly.
(Image Source: irisemedia.com)

As difficult as it may be for a marketing personality to simply share a moment with another human being—especially a stranger narrowly glimpsed through electronic means—Vaynerchuk rightly situates the feat as a requirement for “going native,” and thus, ironically, for being able to ultimately make the sale. In the context of authentic social and informational reciprocity on a given social-media platform, a wax figure easily stands out. Even so, all too many marketers come across as stiff, or contrived, in social media, as if self-centeredness and lying advance rather than detract from sales. Hence, I suspect that a rather intractable marketing personality, and a related and thus enabling culture such as that of American business, stand in the way of business being able to fully integrate social-media marketing.[3]

Similar to why it is difficult to fall asleep without taking a break from trying to do so, marketers have trouble not letting their marketing hand know what their other hand is doing. At the very least, managers overseeing marketing would need to permit and even encourage the marketer(s) tasked with social-media marketing to spend time online without worrying about having to sell anything (even oneself). In hiring such marketers, managers ought to highlight rather than sideline those applicants who enjoy being on a social media platform.




[1] Gary Vaynerchuk, Jab, Jab, Jab, Right Hook (New York: Harper, 2013), p. 22.
[2] Ibid., p. 23.
[3] The fixation on using any opportunity to sell one’s wares is exemplified in CNBC host Jim Cramer’s choice of response when another host mentioned on March 14, 2014 that Jim had worked that weekend at his restaurant. Rather than share the moment by regaling his colleagues and the viewers with a tale of something enjoyable from his weekend at his restaurant, he remarked as if by script that he had worked that weekend because “we were trying out a new chicken sandwich” and a new drink. The sheer contrivance belied any semblance of authentic passion, as might be realized in relishing simply being in his restaurant (e.g., the atmosphere) and later telling people about it, instead of selling as if that were an end in itself. Underneath the obsession with getting as much as possible from any opportunity is greed, a motive and value that knows no limitation. Ultimately, it is a well-worn groove that keeps marketers from “going native” and thus being able to fully inhabit social media.

Sunday, March 9, 2014

Meteorology vs. Astronomy: Is It Spring Yet?

Advancing clocks an hour ahead to Daylight Saving Time conveniently announces itself as the easy-to-remember Spring Forward. Advancing democracy in the Middle East in the early years of the 2010s proclaimed to the world the Arab Spring. Advancing global warming foretells earlier springs encroaching on softened winters. Even as spring blooms in the sights of the popular press, the media quite stunningly often stumbles over when the season begins. The groundhog is no help, differing from year to year on whether spring begins four or six weeks from February 2nd. Astonishingly—and in no small measure my impetus in writing here out of no less than dumbfounded curiosity—even television meteorologists play to the popular ignorance, willingly succumbing to the common practice of taking astronomical “spring” as meteorological spring too. The “professionals’” declaratory tone alone reveals just how certain human beings can be even of presumed knowledge lacking any real foundation. Sadly, this mentality of assertion has become so widespread, or ubiquitous, in modern society that it is virtually invisible to us; and yet the shrillness of epistemological missionary zeal reverberates from no less than modernity’s default: faith in one’s own use of reason. In this essay, I present the first day of spring as a case in point rather than make the entire argument.

Sometime during the first week of March 2014, as yet another front of frigid Arctic air charged southward through North America, various weather personalities on television newscasts relished the apparently startling fact that spring was just two weeks away. Viewers looking out at snow-covered landscapes as far south as Kansas City could marvel at the imminent return of nature’s colors and smells. Most years, the grass is green there by the Ides of March.

Even as the popularly broadcast juxtaposition made for good copy, meteorological spring in the Northern Hemisphere had already come—that is to say, with regard to weather and climate. According to the U.S. National Oceanic and Atmospheric Administration, “(m)eteorologists and climatologists break the seasons down into groupings of three months based on the annual temperature cycle as well as our calendar. . . . Meteorological spring includes March, April, and May; meteorological summer includes June, July, and August; meteorological fall includes September, October, and November; and meteorological winter includes December, January and February.”[1] Therefore, by the first week of March 2014, spring had already arrived as far as the weather is concerned even as television meteorologists were publicly pointing to March 20th as the first day. 
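To make the NOAA groupings concrete, here is a minimal sketch in Python; the function name and structure are mine rather than NOAA's, and only the month-to-season mapping comes from the definition quoted above.

def meteorological_season(month: int) -> str:
    """Northern Hemisphere meteorological season for a month (1-12),
    per NOAA's fixed three-month groupings quoted above."""
    groupings = {
        (3, 4, 5): "spring",
        (6, 7, 8): "summer",
        (9, 10, 11): "fall",
        (12, 1, 2): "winter",
    }
    for months, season in groupings.items():
        if month in months:
            return season
    raise ValueError("month must be an integer from 1 to 12")

# By this definition, all of March is spring; no equinox is consulted.
assert meteorological_season(3) == "spring"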

Even calling so much attention to the first day, as if the northern climes of the contiguous United States would suddenly return their fauna and flora to their other half-lives on that very day, is horribly misleading. Assuming that the meteorologists were well aware that spring weather data begin on March 1st of each year in the U.S., the next-most plausible explanation may be found in the lazy assumption that it is easier to go along with a popular misconception than to expend the effort to stare one in the face and overcome its stolid inertia head-on (the excuse being not wanting to cause confusion).

As a result, Americans are left with the incredibly incongruent “expert” assertion that summer begins not with Memorial Day, but on June 21st of each year, just a couple of weeks before July 4th. Essentially, we are to believe that summer begins in the middle of summer! That such a logical and experiential absurdity can long endure in spite of evidence to the contrary is itself evidence of just how much cognitive dissonance human beings are willing to endure in the face of declarations from perceived expertise. In other words, an erroneous or outdated status-quo societal default has tremendous hold even in the age of (rationalist) Enlightenment (i.e., dating from the fifteenth-century Renaissance period).

Lest it be said that the enabled popular misconception arose spontaneously ex nihilo, the basis of the confusion lies in the rather stupid decision to apply the names of the meteorological seasons (i.e., fall, winter, spring, and summer) to the four quadrants of the Earth’s orbit around the sun. Whereas the meteorological seasons are based on the annual temperature cycle applied to the calendar, “astronomical seasons are based on the position of the Earth in relation to the sun.”[2] Due to the tilt of the planet, solar energy in the Northern and Southern Hemispheres is maximized in different parts of the planet’s orbit. To label a certain interval of space as “spring” is not just highly misleading; the label is a category mistake, for the climatic seasons on Earth do not exist in the void of space.[3]

Astronomy is distinct from weather, even though the two are related (i.e., not disparate).
(Image source: NASA)

Put another way, astronomical “spring” in the Northern Hemisphere refers to the portion of the Earth’s orbit from the point at which the vertical rays of the Sun hit the Earth at its equator (on the “Spring” Equinox, usually on March 21st) to the point at which the vertical rays hit the Tropic of Cancer (the farthest north the vertical rays go, on the “Summer” Solstice, usually on June 21st). In fact, the Summer Solstice is better translated as the high point rather than the beginning of summer. That is to say, the sun reaches its highest arc in the sky over the Northern Hemisphere on June 21st, which is neither the pinnacle nor the beginning of summer in terms of temperatures.[4]
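For readers who want the orbital geometry in numbers, the following rough Python sketch uses Cooper's textbook approximation of the solar declination, the latitude at which the sun's rays strike vertically; the function name is mine, and the formula is only accurate to within about a degree.

import math

def solar_declination(day_of_year: int) -> float:
    """Approximate solar declination in degrees (Cooper's formula):
    the latitude where the sun is directly overhead at solar noon."""
    return 23.44 * math.sin(math.radians((360.0 / 365.0) * (284 + day_of_year)))

# The declination peaks near +23.44 degrees (the Tropic of Cancer) around
# June 21; the sun "stands" at its highest, which says nothing about when
# meteorological summer begins.
print(round(solar_declination(172), 1))  # day 172 (June 21): about +23.4
print(round(solar_declination(80), 1))   # day 80 (March 21): about 0 (equinox)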
 
In short, the piercing pronouncements on the public air-waves of the beginning of spring (and then three months later of summer) ring hollow. Nevertheless, the meteorologists who trumpet the good news do so year after year, as if deer caught in a car’s headlights (or speaking to such deer!). Perhaps the fix is as simple as changing the names of the Earth’s orbit’s four parts so they are not likened to climatic seasons. The puzzle would doubtless still present itself as to how it is that nonsensical claims can long endure as a societal (or even global) default, taken for granted in a way that strangely wards off reason’s piercing rays and those of our own experience. Something is oddly off in how human beings are hard-wired.



[1] National Climatic Data Center, “Meteorological Versus Astronomical Summer—What’s the Difference?” National Oceanic and Atmospheric Administration, June 21, 2013 (accessed March 9, 2014).
[2] Ibid., italics added.
[3] As another example of a mislabeling that should have been known to trigger much confusion and even false claims, the three law instructors from Harvard who founded the law school at the University of Chicago at the beginning of the twentieth century should have known better than to replace the name of the bachelors in law, the LL.B. (i.e., bachelors in the letters of law), with a name implying a doctorate (the J.D., or juris doctor). The actual (professional and academic) doctorate in law is the J.S.D., the doctorate in juridical science, of which the LL.B., or J.D., along with the LL.M. (masters), is a prerequisite and thus not possibly a doctorate in itself. A doctoral degree must be the terminal degree in a school of knowledge, have comprehensive exams in a discipline of said knowledge (graded by professors rather than an industry regulatory body), and include a significant work of original research (i.e., a book-length study, except in a quantitative or scientific field) that the candidate defends before a committee of faculty. Yet how many Americans correct an American lawyer who declares himself to be a doctor? The same goes for the M.D. as well (a program of survey courses followed by a couple of years of seminars—the typical substance of a bachelors program), and yet how many physicians and surgeons presume themselves entitled to use the doctoral title (Dr.) even as they dismiss the valid appellations that holders of the Ph.D., J.S.D., D.Sci.M. (Doctorate in the Science of Medicine), D.B.A. (business), D.D. (divinity/theology), and D.Ed. (education) use as per the rights and privileges of those doctoral degrees? Meanwhile, the general public goes on grazing as if the snow were green grass.
[4] The word solstice in English comes from the Latin word, solstitium, which combines sol (sun) and stit (from sistere, to make stand).  In other words, the sun is made to stand (highest) in the Northern Hemisphere on June 21st of each year. Nothing is thus implied about any beginning; rather, the implication is that of a pinnacle or high point. Yet even in this sense, meteorological summer is different, for its high point in terms of temperature comes in mid to late July. 

Friday, March 7, 2014

Former Fed Chair Greenspan: How to Break the Back of a Bubble

While being interviewed on CNBC on March 7, 2014, Alan Greenspan spoke a bit on the problem of irrational exuberance in a market. Pointing to the failure of the Federal Reserve under his chairmanship to innocuously dissolve the “dot-com” bubble in the 1990s, Greenspan said he had come to the conclusion that asset-appreciation bubbles cannot be “defused” (for reasons he says are in his new book) “unless you break the back of the actual euphoria that generates the bubbles.”[1] Alas, breaking that euphoria would involve nothing short of unplugging a basic instinct in human nature; both monetary and fiscal policy would doubtless come up short. However, I suspect that the field of rhetoric may have something to say about how we can deflate societal exuberance, but only on the condition that greater clarity will have been achieved in identifying whether a given market is overvalued due to emotional excess (i.e., emotive greed having reached a critical mass) circumventing normal risk-aversion.

Greenspan’s prescription may have more to do with social psychology than economic theory. Even though the former central banker’s expertise, or ken, does not extend to psychology or sociology, the advice darts right to the central question to be researched. I am not suggesting that the claim be swallowed whole; back in 2008, after Lehman Brothers’ financial collapse and the subsequent portent of a tsunami so powerful it could take down the entire global financial system “by Monday,” Greenspan admitted in Congressional testimony that his mental model of financial economics suffered a fatal flaw he had not seen coming.

Having held a free-market, or laissez-faire (“let do”), theory firmly ensconced in his head, Greenspan suddenly realized that the market mechanism may not “price additional risk” once market volatility reaches a certain point. Instead of asset prices plummeting until enough spooked buyers return to the market, the financial markets themselves freeze up. This is why Greenspan’s successor, Ben Bernanke, told Congressional leaders in September 2008 that without a bailout “we might not have an economy by next Monday.”


In other words, Greenspan’s paradigm or theory, which had insisted that markets can always self-correct, could not account for the credit freeze that began in the commercial-paper market (short-term corporate debt). High volatility in a system, combined with high risk (from anticipations of systemic risk being actualized), shuts down the market mechanism itself. When he ran the Federal Reserve, Greenspan had been very wrong about the impact of systemic risk on the ability of markets to keep operating.

As if Greenspan's admission had been part of some nightmare or some figment of the imagination, Andrew Ross Sorkin, a financial-markets host at CNBC, welcomed Greenspan on the air five years later with such vaunted praise that viewers could be forgiven for not remembering that Sorkin had pointed to Greenspan’s fatal flaw as one factor among several in the near-collapse of the housing market in the wake of Lehman's bankruptcy. Surely Sorkin was hardly oblivious to the ex-central-banker's grave error. Why then did the journalist act as if Greenspan were one of the priests at the Greek oracle?

Even after admitting the fatal flaw in the model he had held to at the Fed, Alan Greenspan still enjoyed considerable respect. (Image Source: The Guardian)

The short answer may be that Sorkin did not want to lose any of the rich and powerful friends on Wall Street he had interviewed in 2008 for his book. For a person to admit the existence of a fatal flaw in his or her ideology, and therefore in any supporting theoretical models as well, and then be treated as though infallible on another body of knowledge (i.e., international relations) stretches the mind's capacity for holding a logical contradiction (i.e., cognitive dissonance). Rather than being limited to Sorkin, I suspect that the refusal or inability to put a person's present statements in the context of his or her past track-record is by now "hard-wired" into American society. The over-valuing of the new at the expense of the past probably enables the denial.

Regarding Sorkin, his fawning before his notable interviewee, including exclaiming "wow" as Greenspan bragged at the beginning of the interview, strikes me as blatant enough to be misleading. Especially having written a non-fiction book about the financial crisis of 2008, Sorkin should have prepped the television viewers up front, so they would not find themselves once again swallowing wholesale what Greenspan says as the Gospel truth. In fact, Sorkin may have inadvertently opened the door to another systemic bubble hitting us as a complete surprise.




[1] “Greenspan Revisits ‘Irrational Exuberance,’” CNBC, March 7, 2014 (accessed same date).

Sunday, March 2, 2014

Über “Surge-Pricing”: There’s a Mobile App for Price-Gouging!

A week into 2012, The New York Times ran a piece on Uber (as in Übermensch?), a taxi and livery company founded in 2009. As Curtis Lanoue aptly describes in his essay on the company, its novelty consists of a unique mobile app that passengers, drivers, and the company’s managers use to bring demand and supply into equilibrium by means of differential pricing, including “surge pricing.”[1] The price of a taxi or livery ride depends on temporal and geographic demand and supply levels. That is to say, the pricing increases as more people request rides. Theoretically, the pricing should go back down even in situations in which demand is high, as drivers are enticed to continue driving a few more hours. Hence, the wait time for an Uber cab after a concert or sporting event should be reduced, even if the first people out have to wait until the price goes down or pay more than they expected. In this essay, I suggest that “stickiness” in even such a small-scale market mechanism as a mobile app can give rise to some formidable ethical problems.
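To illustrate the equilibrating logic (and where it can stick), here is a toy sketch in Python. This is not Uber's actual algorithm, whose details the company does not disclose; the function, its parameters, and the cap are my own assumptions for illustration.

def surge_multiplier(ride_requests: int, drivers_available: int,
                     cap: float = 5.0) -> float:
    """Toy surge pricing: scale the fare with the demand/supply ratio,
    capped so the multiplier cannot grow without bound. Illustrative only."""
    if drivers_available == 0:
        return cap
    ratio = ride_requests / drivers_available
    return min(max(1.0, ratio), cap)

# A 5x demand/supply imbalance turns a $27 ride into a $135 one,
# the very jump described in the Times piece discussed below.
base_fare = 27.0
print(base_fare * surge_multiplier(ride_requests=500, drivers_available=100))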

Price-gouging primped up as a mobile app? (Image Source: theverge.com)

As one case in point, the Times showcases that of Dan Whaley, a tech entrepreneur in San Francisco, who on a New Year’s Eve got into a black Town Car for a one-mile taxi ride to a party. “The ride cost him $27. At the end of the night out, [he] took a Town Car home from the party. This time, the exact same ride cost $135.”[2] Apparently not enough of the company’s drivers were enticed to keep driving for the price to deflate. Hence, for many customers, “price surge” has since become synonymous with “price gouge.”

What if Dan Whaley had nearly maxed out his credit card and did not have enough cash left after the party to cover the difference? Whereas a prospective airline passenger or hotel customer can know the prices up front, before travelling, an Uber taxi or livery passenger knows the actual fare being charged only at the end of the ride, as his or her credit card is being charged. Therefore, Dan Whaley could not have planned his night around how much he would have to spend on his rides. In other words, the supply-side corrective that theoretically brings the pricing back into line is insufficient. Even in a competitive market, enough drivers could pass up even the higher fares showing on the app rather than chance that the fares would be appreciably lower by the time the extra rides were done; other drivers might simply prefer not to have to deal with drunk people on New Year’s Eve, while still other drivers crave sleep at any opportunity cost.

Meanwhile, Travis Kalanick, one of Uber’s co-founders, was not able to discern the difference between long-term uncertainty, which can be avoided or managed, and the short-term variety, which can leave people stranded without a ride home on a cold night, or before a major hurricane. “Sure it’s about the regularity, but someone who is driving a car on a regular occurrence deals with dynamic pricing all the time: it’s called gas prices,” Kalanick quipped, blatantly adding insult (of ignorance) to injury as if doing so were good customer service.[3] “Because this is so new,” he added, “it’s going to take some time for folks to accept it.”[4] Well, maybe “folks” were not accepting “surge pricing” because they had seen “behind the curtain” and found “price gouging” at their expense. Perhaps rather than assuming that the public needed to be educated on the “advanced” pricing mechanism, Kalanick might have humbly considered whether he himself needed to be educated. At the very least, finding a credit-card receipt at the end of a ride home showing a charge five times that of the ride out is not like finding that the price of gas has gone up a bit from last week, or locking in a higher “peak-season” airfare before flying.

Besides the likely hazardous consequences from the lack of preparedness made possible by “surge pricing,” using the seemingly innocuous new label to mask what is essentially price gouging is mendacious, or at least highly sneaky, and can also be regarded as unethical in business.

Even after Hurricane Sandy hit New York City in late October 2012, high demand triggered Uber’s “surge pricing,” with fares doubling. Bilking customers just after a major storm that had taken out the subway system does not exactly gain the confidence of the public or existing customers. Being stranded at such a time is something more than a minor inconvenience. Uber’s New York general manager, Josh Mohrer, did little to assuage customer anger by pointing out that he had increased driver pay because “the opportunity for our driver partners to earn money results in more drivers coming out.”[5] After a major hurricane, more factors than money are involved in how many drivers (can) come out; the doubling of fares suggests that the number of drivers who did manage to get on the non-flooded streets was not enough to bring the price down. Nevertheless, a company spokesperson announced that “surge pricing would be enabled because the company ‘want(s) to provide [customers with] a reliable service.’”[6] Apparently doubling the fares on the spot is consistent with reliability (as well as with making up for some of the hit from raising the drivers’ pay during the disaster). Furthermore, it would seem that a management “doing its part” in its responsibility to society involves one group benefitting while another suffers. Are not employees paid to serve the customers, rather than vice versa?[7]



[1] Curtis Lanoue, “The Problem with Uber,” Curtis Knows Nothing, February 26, 2014 (accessed March 2, 2014).
[2] Nick Bilton, “Disruptions: Taxi Supply and Demand, Priced by the Mile,” The New York Times, January 8, 2012.
[3] Ibid.
[4] Ibid.
[5] Bianca Bosker, “Uber Doubles New York Driver Pay to Get More Vehicles on the Road (And Eats the Cost),” The Huffington Post, October 31, 2012.
[6] Ibid.
[7] This point reminds me of my hometown’s local mass-transit municipal bus company. Managers made a more direct route (mislabeling it as “express”) from the edge of town to the city center, taking advantage of the fact that drivers going off-duty on their routes ending at the transfer center on the outskirts still needed to drive back downtown to the home terminal. Even so, one veteran driver liked his drive back after 5pm without any passengers on board, so he would violate company policy by showing the “Out of Service” designation and have the managers keep his return trip off the new “express” route. The lack of market competition meant that the managers could get away with protecting their drivers at the expense of riders. Such a mentality tends to pervade a company, which then reeks of a smallness befitting stubborn rigidity.  

Saturday, March 1, 2014

Putin’s Russia Invades Ukrainian Territory: National Sovereignty Turned Against Itself at the UN

On February 28, 2014, Ukraine’s UN Ambassador Yuriy Sergeyev informed the Security Council that Russia had invaded the Crimean Peninsula, a semi-autonomous region of the sovereign state of Ukraine, by means of military transport planes, 11 helicopters, and trucks. In exchange for Ukraine having given up its nuclear weapons, Russia was treaty-bound to respect the sovereignty of the former Soviet republic. After briefly discussing whether Putin’s plan should have come as a surprise around the world, I want to take a critical look at the Russian president’s rationale for invasion in order to argue that political realism (i.e., the priority being on strategic interests at the expense of the world) and the related doctrine of absolute sovereignty are faulty maxims, even as they remain dominant in the twenty-first century due to the staying power of pre-genetic-engineered human nature (e.g., “might makes right”).

Russian transport trucks inside Crimea in Ukraine. (Image Source: CNN)

Coming on the heels of the Olympics that showcased Russia to the world, the show of force must have come as a complete surprise to many people around the world. After all, on the day before the well-planned invasion, Russia’s ambassador to the United Nations had dismissed with laughter a journalist’s question on whether Russia was suddenly conducting its “military exercises” near Crimea as a precursor to an invasion. Indeed, Vitaly Churkin even conveyed a sense of having been insulted by the very question! To be sure, any surprise may have been muted for those people who had watched the tape of Cossack militiamen whipping the three young women of the musical group Pussy Riot for singing an anti-Putin song while jumping around on a sidewalk (as per youth’s natural vigor) in Sochi, Russia as the Olympic games were in progress. Indeed, some people may have felt an intuitive sense that the Olympics just did not feel right for some reason, and so could in some measure feel vindicated in not having been manipulated by Putin (or the NBC television network in the US).[1]

As though by a script known only to a few while the viewers are fed lies at the surface, well-armed men, presumably Ukrainians but actually brought in by bus from Russia (among them members of Vladimir Putin’s favorite biker group), took over the provincial legislature of the semi-autonomous region a day or two before the invasion. Once the Russian thugs were in control, the pro-Russian Crimean leader, Sergey Aksyonov, somehow found himself installed as the region’s boss. He “returned the favor” by “asking” Putin for help in maintaining peace.[2] The two-step dance by the emperor and his would-be governor of Ukraine, or at least Crimea, can be understood as an exercise in “mutual back-scratching” (or something far less fit for public viewing).

Meanwhile, the two men dancing where dancing was illegal at the time kept their mutually satisfying engagement to the basement as they acted out a script upstairs to edify and assuage a global audience well-ensconced in slothful inertia (and its cousin, the “meeting”). Speaking with U.S. President Barack Obama on the first day of the invasion, “Putin stressed ‘the presence of real dangers to the lives and health of Russians who are currently present in the Ukrainian territory.’ Putin said that Russia reserves the right to defend its interests and the Russian-speaking people who live there,” according to the Kremlin.[3] Meanwhile, Ukraine’s acting President Oleksandr Turchynov insisted that any reports of Russians and Russian-speaking Ukrainian citizens in the Crimean region being at all threatened were pure fiction. US President Johnson’s fictional “Gulf of Tonkin attack” rationale for sending troops to Vietnam in 1964 may come to mind as a similarly fabricated incident serving as a façade for a pre-existing plan to escalate a war by adding force.

With Sergey Aksyonov essentially installed by Putin’s regime, the Russian president could not viably claim to have been invited into the territory. What about Putin’s claim of reserving his government’s right to protect Russian citizens in other countries? Would the United States be justified in invading the sovereign state of Mexico because Americans there are at risk due to the prevalence of powerful drug cartels, or because Americans are violating US laws by aiding smugglers on the other side and thereby facilitating the drugs reaching American soil? No Americans had been at risk in Iraq when US President Bush manipulated Congress (and the UN) with faulty (or outright dishonest) “intelligence” reports of weapons of mass destruction in Iraq. Must the citizens at risk even be inside the malevolent country for an invasion to be valid? Surely international law does not confer on Russia or any other sovereign country (including Ukraine) the right to invade another sovereign nation simply because citizens or ex-compatriots there may be threatened. Notice that Putin’s claim of “real dangers” could only mean potential threat as of the day of the invasion. Surely a slippery slope comes into play if invasions can be justified on the basis of potential energy.

Having stripped away Putin’s façade, we are left with the naked aggression of an empire going after a former (member) state as per the empire’s own geopolitical and military-strategic interests. The theory of political realism applied to international relations asserts that polities act only or primarily on the basis of their “self-interests.” Like a car ignoring the law in cutting in front of a cyclist or pedestrian, a greater force can dismiss the constraint of parchment and impose its will without fear of being stopped by a lesser force.

What's that egghead doing quoting from Hobbes to explain Putin and Assad? 

In his seventeenth-century masterpiece, Leviathan, Thomas Hobbes warned the world that a sovereign ruler can do whatever he wants, as he is constrained only by divine judgment in the afterlife (when it is too late for the king’s victims in this world). A century earlier, Jean Bodin, who also held an absolutist view of sovereignty, had insisted that sovereigns are also bound by divine law while still ruling. The absolutist notion of a ruler’s or nation’s sovereignty is not as absolute as modern-day defenders at the UN, such as Russia and China, conveniently assume. At least in Russia’s case, the doctrine is defended only selectively, when violating it is not in Russia’s strategic interest. Fortunately for Russia, its veto power in the UN’s Security Council is pliable enough to be used for both purposes; the inherent conflict of interest in allowing a member of the council to veto resolutions meant to rein in that member’s own violations is somehow deemed acceptable enough that the systemic, or institutional, design-flaw has yet to be expunged from the land of the living.




[1] See “The 2014 Winter Olympics in Russia: ‘Where There’s Smoke, There’s Fire.’”
[2] Chelsea Carter, Diana Magnay, and Ingrid Formanek, “Obama, Putin Discuss Growing Ukraine Crisis,” CNN, March 1, 2014.
[3] Ibid.

Wednesday, February 26, 2014

The Triangle Fire of 1911: A Story of Greed, Control, and Sadism in Business

If the standard business calculations and even greed are not sufficient to account for what occurs in the business world, perhaps we need to dig deeper in order to get to more subterranean motives that are not typically thought to surface amid the business fauna and flora. Did Richard Fuld, the CEO of Lehman Brothers when it collapsed in 2008, tell his subordinates to keep buying real-estate-based properties and securities because he was greedy? Was it greed that relentlessly pushed him to over-reach as he repeatedly found Lehman to be wanting in comparison with Goldman Sachs? Rather than cutting into Lehman's over-dissected cadaver to look for pathogens besides greed, I engage here in a "dig," vicariously, near Washington Square Park in New York City, at the site of a horrendous fire in a garment factory that occurred about a century before the implosion at Lehman Brothers.

On March 25, 1911, 146 garment workers died in the infamous “Triangle Fire.” The vast majority of the people who died—the youngest being 14 years old—were women. Onlookers at street level watched helplessly as 62 workers jumped or fell to the ground—many aflame as they plummeted. Louis Waldman, who would later be elected to the New York Assembly, describes the scene as follows:

“Word had spread through the East Side, by some magic of terror, that the plant of the Triangle Waist Company was on fire and that several hundred workers were trapped. Horrified and helpless, the crowds—I among them—looked up at the burning building, saw girl after girl appear at the reddened windows, pause for a terrified moment, and then leap to the pavement below, to land as mangled, bloody pulp. This went on for what seemed a ghastly eternity. Occasionally a girl who had hesitated too long was licked by pursuing flames and, screaming with clothing and hair ablaze, plunged like a living torch to the street. Life nets held by the firemen were torn by the impact of the falling bodies. The remainder waited [on the ninth floor] until smoke and fire overcame them. The fire department arrived quickly but . . . [had no ladders] that could reach beyond the sixth floor.”[1]

The Triangle Fire in 1911. Why did NYC allow the construction of a building whose top floors were beyond the reach of existing fire ladders? (Image Source: Wikipedia)

As policemen looked on helplessly, I wonder if any of them remembered beating those same workers a year before when the entire garment labor force in New York City went on strike in order to unionize. Max Blanck and Isaac Harris, the company’s owners, had paid off the police (and hired prostitutes) to attack the striking women. Adding insult to injury, the police would arrest them and tell the judge that the women had attacked them. Tellingly, Blanck and Harris held firm on the union issue even as the owners of the other companies capitulated on that pivotal point.

Blanck and Harris steadfastly believed that ownership of a factory meant that only they had the right of control, not only over the terms of labor, but also over what went on inside the factory.[2] Hence, they were able to retain the industry norm of locking side exits so that foremen could inspect the workers and their bags for stolen materials. Even though this policy doubtless came from the two owners, they subsequently claimed that they had not known the side doors were locked on the ninth floor and thus were not culpable, as they made their way from the tenth floor to the roof and onto another building. Incidentally, the foreman on the ninth floor managed to leave without unlocking any of the alternative exits. The owners evaded a criminal manslaughter conviction by discrediting a credible worker-witness, but they would have to pay $75 per victim, which the insurance settlement more than covered, with $60,000 to spare.[3] In short, the owners who had singularly resisted unionization actually made out rather well from having defeated their workers’ demand for a safer workplace.

To be sure, winning on the union point was not necessary for an agreement on safety, as the two owners agreed to reduce workweek hours and increase wages. I submit that greed and the resulting unethical policy and conduct may not suffice in getting to the bottom of this tragedy. Far less obvious than the mangled, bloody pulp on the sidewalk is the owners’ shared mentality. Although a level of industry competition fit for Adam Smith’s The Wealth of Nations motivated Blanck and Harris to incessantly strive to reduce costs, including the labor cost of production, a fixation on being in control certainly of their “stuff” and even other people—almost to the point of viewing the workers at work as part of the “stuff”—may have surpassed even greed as the underlying motivation or even obsession. Certainly the owners were unique in the garment industry then in the extent to which they refused to admit a union during the strike in 1910; unionization represented to them an affront to their total control.[4] In other words, Blanck and Harris may have had “control issues.”

Even so, “being the boss” may not get us far enough down in our archeological dig. In 1913, Blanck was arrested again for locking the door in his factory during working hours.[5] In retrospect, the discredited worker who had testified two years earlier on the Triangle factory fire must have felt some vindication, at least concerning Blanck’s association with the short-sighted policy. The fine of only $20 likely had little impact on Blanck in his second venture, whose workers could have little faith in the gilded justice of the courts and the moneyed laws of the legislatures.

One of the floors on which Triangle sewers worked. (Image Source: YouTube)

For our purposes here, that Blanck “just didn’t get it” even after the horrific tragedy in 1911 points to a sordid mentality beyond even a rather extreme control-fixation coming out of an inner sense of insecurity or emotional instability. The sickness also manifests in Blanck’s (and Harris’s) decision in 1910 to start the violence by paying prostitutes and officers of the law to beat workers on the picket line as though the two owners themselves had been attacked. Can we really say that they were not somehow involved in starting the fire, even if indirectly through a foreman putting a lit match in a scrap bin on the eighth floor? After all, the owners and foremen made it out of the building relatively quickly, and they already knew how to subvert officers of the law (both police and judges) so respect for the law would not have been an obstacle. The prospect of a nice insurance settlement may have also been in the mix, even if the money were secondary to the fuming desire to inflict still more pain on the workers who had presumed even to question the bosses’ (right of) control. I suspect the owners viewed the workers as subhuman in a sense, certainly not worthy of respect as fellow human beings.

In short, a certain sadism may enter into the equation as the desire to see those whom the owners viewed as inferior suffer for having dared resist the total control and insist on a share as a unionized workforce. I suspect the owners viewed themselves as the parents (or adults) and their workers as their children (based on level of income and being immigrants) even though this family picture breaks down even as a metaphor when the workers leave work. As “parents,” Blanck and Harris must have been jolted in 1910 as they finally had to encounter the “daughter” they had always excluded from the family (i.e., labeling her as a “black sheep” and so informing, or forming, the other family members as if supporting actors). The system works for the family’s dominant coalition and its enabling stakeholders (e.g., owners, foremen, suppliers, police, and the courts) by shielding them from their own pathologies. By 1910, the “daughter” had grown up sufficiently in self-confidence to recognize the ruse and insist, even at the risk of starvation (i.e., being estranged from the only family/normal she had known), on a share in the control governing and structuring her relationships with those who by then had become well ensconced in monopolized control. Blanck and Harris (two gay parents?) must have felt humiliated as their conveniently labeled “problem child” began to relate to them as one adult relates to another. A warped perspective maintained over years from the sheer willfulness of an underlying pathology can withstand the onslaught of reality with remarkable stubbornness. Hence, Blanck maintained his “locked door” policy in the wake of a horrific showing of reality.

The force of a warped mind engaged in business can overcome resistance from even greed; turning strikers into resentful victims (and perhaps even burnt corpses) is not exactly good business (i.e., financially as well as ethically). Reducing business to its financial element, treating it as the basis of business, not only enables Blanck’s and Harris’s absolutist notion of private property (the analogue in government being absolute national sovereignty), but also discounts or dismisses outright the putrid motives that may reach further down than greed in the recesses of the mind, where hypertrophic (exaggerated) subterranean emotional monsters can evade the light of day as they slither about in the river Styx.



1. Louis Waldman, Labor Lawyer (New York: E.P. Dutton & Co., 1944), pp. 32–33. If you are a writer or interested in improving your writing, the following sentence from the quote above provides a good example of what not to do. Waldman writes, “Life nets held by the firemen were torn by the impact of the falling bodies.” This sentence is in the passive voice (e.g., It was done by him). The passive can be used to emphasize a noun that would be the direct object in the active voice. Did Waldman really want to emphasize the life nets? “Falling bodies” fits better with the emphasis in the paragraph. Try this out for size: “The falling bodies tore through the life nets being held up by firemen.” Here, I want to emphasize the life nets more than the firemen, so I have used the passive voice in the subordinate clause. There is indeed a place for the passive voice, but only strategically rather than as a habit, often gained from using the device to evade responsibility rather sheepishly (e.g., “You will be asked to show I.D.” rather than “I/We will ask you for your I.D.”). Little people finding themselves with some power tend to find the allure of passive aggression too tempting to resist. Hence Maggie Smith’s line on Downton Abbey: “We give these little people some power and it goes to their heads like strong drink.” Notice the active rather than passive voice here, as the Dowager Countess pushes back against the lower passive aggression. Part of my intent as a writer is to make the subterranean agendas transparent, so we all know what is really going on rather than continuing to be beguiled by mere subterfuge primped up like some tropical bird.
2. Interestingly, 21 years later, Adolf Berle and Gardiner Means would pen The Modern Corporation and Private Property in order to present their thesis that ownership (i.e., the stockholders) had become separated from control (i.e., the management) in the large-scale corporation-form of business enterprise. Blanck and Harris both owned and managed their company, and thus viewed the two as rightfully fused.
3. John M. Hoenig, "The Triangle Fire of 1911", History Magazine, April/May 2005.
4. “Triangle Fire,” American Experience, PBS (aired February 25, 2014).
5. Hoenig, “The Triangle Fire of 1911.”

Sunday, February 23, 2014

Changing Governments and Time Zones: Why So Difficult?

Ever wondered why so much energy must be expended to dislodge a long-established institution, law, or cultural norm? Why does the default have so much staying power? Are we as human beings ill-equipped to bring about, not to mention see, even the “no-brainer” changes that are so much (yet apparently not so obviously) in line with our individual and collective self-interest? In this essay, I look at Ukraine, Spain, and Illinois to make some headway on this rather intractable difficulty.

Ukrainian President Yanukovych refused for months to budge then suddenly disappeared as if a teenager fleeing from a now-likely punishment. (Image Source: AP)

In Kiev’s central square, Ukrainian protesters braved bitter cold for months in late 2013 and early 2014 without any movement whatsoever toward dislodging a divisive president who may have gone on to surrender Ukraine’s sovereignty for money, in line with Vladimir Putin’s imperial dream of a restored Russian empire under the rubric of a “Eurasian Union.” It took twenty and then seventy deaths before the steadfast protesters would see the president replaced by an interim, parliament-centered coalition government. After months of stalemate, the president actually fell from power quite suddenly once his partisan support in parliament had sunk below a threshold.[1] Until that point was reached, any trickles of power shifting behind the scenes did not register in the least as even a slight movement toward a resolution in the massive tug-of-war. Such ongoing intransigence, or gravity, that seemingly inheres in a default is itself an obstacle that can easily dissuade anyone who comes to view the way things are as not only contingent, but also, well, rather stupid. Such an individual might wonder why societal self-corrections in the public interest are so elusive even though they are rather obvious.

Even realizing that a given domain is subject to the rigid longevity of invisible sub-optimality can be difficult to achieve. For example, only after seven decades did the E.U. state of Spain seriously reassess Franco’s decree of May 2, 1942, moving Spain from the GMT time zone, which Spain had adopted at the International Meridian Conference in 1884, to GMT+1. Setting clocks forward an hour put the dictator on the same time as Hitler’s Germany (and France) and Mussolini’s Italy. Seven decades later, in October 2012, the VII National Congress for Rationalising Spanish Time Zones proposed returning to GMT. With more daylight in the morning and less in the evening, state residents might not stay up so late on work-nights. Once the state had been bailed out by the E.U. federal government after the financial crisis of 2008, Spain could ill afford the continued loss of 8 percent of the state’s GNP due to productivity losses from the nocturnal proclivity that coincided with another cultural icon, the siesta. For our purposes here, why did it take decades even to propose the easily rationalized correction, even though it meant returning to a rule that had been in effect for decades before the rise of Nazi Germany?

Let’s travel across the Atlantic Ocean to Illinois, whose major metropolis is the bewindowed city of Chicago. The latest sunset there is at 8:30pm (20:30 hrs), which occurs during the last week of June. The sun rises during that week at around 5:15am, though relatively few Chicagoans are awake at 4:45 to witness first light. Bentham’s rule of utilitarianism would have us believe that the greatest good for the greatest number somehow matters in life. Might it be rather obvious that taking an hour of daylight during the summer from early morning and depositing it at the other end of the day, delaying evening’s turn into night, would be more optimal? Perhaps it is merely common sense that many more people could enjoy the hour of light in question were they awake to see it. This point would not be missed by many tourists from Spain. Why is it so difficult for the people losing out to become aware of what they could have in a better life?

Even moving another hour of daylight, such that sunrises would occur roughly between 7:00am and 7:30am during June and July (with daylight before 7) and sunsets would come after 10:00pm (22:00 hrs), would not unduly penalize “morning people.” Yet this would mean a three-hour shift from standard time. Achieving even a 9:30pm sunset would entail a two-hour change (not necessarily on the same days). Lest such a proposal seem too catastrophic, the PSOE, a political party in Spain, established the addition of another hour in summer beginning in the 1980s.
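As a toy check of the arithmetic, the Python sketch below treats the solar events as fixed (approximating Chicago's late-June extremes on standard time as a 4:15am sunrise and 7:30pm sunset, which are rough figures of mine, not official data) and merely relabels the clock.

def shifted_clock_time(solar_event_hour: float, extra_offset_hours: int) -> str:
    """Clock time of a fixed solar event after adding extra hours to the
    standard-time offset. The sun does not move; only the labels do."""
    h = (solar_event_hour + extra_offset_hours) % 24
    return f"{int(h):02d}:{int(round((h % 1) * 60)):02d}"

# Standard time (+0): ~04:15 sunrise, ~19:30 sunset in late June.
# DST (+1) gives the familiar 5:15am/8:30pm; +2 and +3 shift further:
print(shifted_clock_time(4.25, 2), shifted_clock_time(19.5, 2))  # 06:15 21:30
print(shifted_clock_time(4.25, 3), shifted_clock_time(19.5, 3))  # 07:15 22:30

As the output shows, a two-hour change yields the 9:30pm sunset mentioned above, and a three-hour shift yields a roughly 7:15am sunrise with a sunset after 10:00pm.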

Perhaps the fear of the unknown is assuaged by the news of the same unknown being part of the default somewhere else. Furthermore, perhaps what does not work in a state of one Union may work just fine in a state in another Union; even within an empire-scale Union diversity of clime and custom justify allowances for interstate differences (e.g., via federalism).

Perhaps, moreover, members of Homo sapiens are “hard-wired” to prefer “missing out” in the face of even a relatively simple change that would add appreciably to the good of the greatest number. In this case, the good to be had is in terms not only of summer enjoyment in a clime whose long winters can keep the door open to extreme cold from the Arctic, but also of improved health (from more exercise in the open air) and greater safety (i.e., fewer muggings and rapes). Perhaps the overriding lesson here is simply that making life a little better—a bit more enjoyable—need not be so difficult.

Dorothy, in The Wizard of Oz, could have used the magic slippers to return to Kansas at any time. Unfortunately, she did not even ponder the possibility, and thus had to come to it the hard way.
(Image Source: Hollywoodreporter.com)


I am reminded of Dorothy in the 1939 film, The Wizard of Oz. The good witch of the North tells her at the end that she could have used her ruby slippers to return to Kansas at any time, but that she had to come to realize this herself. Perhaps the question for us is why we have such trouble coming to realize that we, too, need not wait so long to effect change that lies within our power and that we could have accomplished long before. The pickle in all of this is that enough people in a given society must come to this self-empowering realization themselves for any movement to take place. For once a threshold is met, even a societal change can be effected surprisingly fast and much more easily than expected or feared.

1. Jim Heintz and Angela Charlton, "Ukraine Parliament Boss Takes Presidential Powers," GlobalPost, February 23, 2014.