Archive for April, 2008

A Response to Mankiw’s “The Macroeconomist as Scientist and Engineer”

Thursday, April 10th, 2008

Mankiw begins his endeavor by relating the field of economics to that of a science; whether it’s a social science or a hard science is up for debate. Nevertheless, he feels it should be termed a science so that undergraduates starting out in this discipline don’t mistake it for a lot of haphazard guessing about how the world is and how it should be shaped by policy decisions. It’s a science for one simple reason: “economists formulate theories with mathematical precision, collect huge data sets on individual and aggregate behavior, and exploit the most sophisticated statistical techniques to reach empirical judgments that are free of bias and ideology.” Economics is also considered a type of engineering because it was developed to solve practical problems. Mankiw sets up his paper to trace the history of macroeconomics and evaluate what we have learned. His goal throughout is to show that macroeconomics grew out of two mindsets: those who view macroeconomics as a science (understanding how the world works) and those who view it as a type of engineering (an application or tool to solve problems). He concludes the introduction by claiming that macroeconomics started out as an engineering discipline in which people attempted to solve problems, and only in the last several decades did it become a science in which theories and tools were developed with little or no practical application (though others would beg to differ, I’m sure).

Macroeconomics first appears in the literature during the 1940s as a result of the Keynesian Revolution. Many Nobel laureates (Solow, Klein, Modigliani, Samuelson, and Tobin are specifically named in the paper) point to Keynes’ General Theory as their starting point in the field of macroeconomics. This influential book probably would not have been published had it not been for the Great Depression because “there is nothing like a crisis to focus the mind.” Keynes’ General Theory left a lot of questions unanswered, especially the question of what model tied all of his thoughts together. This spurred others to continue in this field of economics, with early attempts by Hicks and Modigliani to develop a clearer model of the macroeconomy using the IS-LM framework. Though critics say it’s too simplified to capture the macroeconomy, the whole point of the IS-LM model was to simplify a “line of argument that was otherwise hard to follow.” In that respect, the IS-LM model did its job; it’s just not the entire story. By the 1960s, many complex simultaneous-equation models had been developed to forecast and evaluate the effectiveness of policy, a framework still used in the Federal Reserve’s FRB/US model to this day. As Mankiw points out, the science of economics became the engineering of economics starting in the 1940s, when the theorists behind macroeconomics wanted to put their ideas to use and worked as advisors to presidents to formulate policy.
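For reference, here is a minimal textbook statement of the Hicks-Modigliani IS-LM system (my gloss in standard notation, not taken from Mankiw’s paper):

```latex
\text{IS (goods market):}\quad Y = C(Y - T) + I(r) + G
\qquad
\text{LM (money market):}\quad \frac{M}{P} = L(r,\,Y)
```

Here Y is output, r the interest rate, T taxes, G government spending, and M/P real money balances; equilibrium output and the interest rate sit where the two curves intersect, which is what makes the model such a compact summary of Keynes’ “line of argument.”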

Classical undertones re-emerged from Keynesian economics with the onset of monetarism and new classical economics. Monetarists attacked the Keynesian consumption function: Milton Friedman theorized that the marginal propensity to consume out of a temporary change in income is small, which would produce much smaller multiplier effects throughout the economy than Keynes’ model predicts. Mankiw does make it a point that though Friedman’s idea of transparent, easy-to-understand rules for the Federal Reserve never gained a strong following, it was a precursor for other countries’ central banks, which have since established bands within which inflation rates may move. During the 1960s, the Phillips curve helped complete a Keynesian model that lacked a theory of inflation. Though Keynes knew there was a relationship between unemployment and inflation, not much could be said about it at the time. Friedman, however, recognized that there is a difference between the short-run and long-run tradeoffs. In the short run, the Phillips curve relationship holds water: inflation may be unexpected or unanticipated, and unemployment can therefore fall. In the long run, however, the relationship breaks down because expectations adjust, and this introduction of expectations was a huge step forward in macroeconomic theory. Rational expectations, and the Lucas critique in particular, built on Friedman’s introduction of expectations. The Lucas critique says that “Keynesian models were useless for policy analysis because they failed to take expectations seriously.” Lucas continues the argument by claiming that the economy consists of rational economic agents who have imperfect information. Markets will clear, but monetary policy may get in their way because all monetary policy does is confuse people about the difference between absolute prices (nominal) and relative prices (real). Real business cycle theorists were the third wave of new classical economics to break from the Keynesian Revolution. Like the rational expectations theorists, RBC economists assumed markets clear instantly, but they differed in viewing monetary policy as ineffective, and thus it wasn’t in their business cycle model. Rather, business cycles were traced out by random technology shocks and the resulting intertemporal choice between work and leisure, the determinant of unemployment.
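The expectations-augmented Phillips curve that came out of the Friedman-Phelps argument is usually written along these lines (a standard textbook form, added here for clarity rather than quoted from Mankiw):

```latex
\pi_t \;=\; \pi_t^{e} \;-\; \beta\,(u_t - u^{n}), \qquad \beta > 0
```

In the short run, expected inflation \pi_t^{e} is given, so surprise inflation pushes unemployment u_t below its natural rate u^{n}. In the long run expectations catch up (\pi_t^{e} = \pi_t), the inflation terms cancel, and unemployment returns to u^{n} at any inflation rate, which is exactly the vanishing long-run tradeoff described above.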

New Keynesians came onto the scene and looked to the issue of microfoundations, since every microeconomics course teaches that firms and households maximize, which promotes efficiency through market clearing. These new Keynesians realized, however, that there is a time element involved in market clearing. This is not to say that markets won’t clear, only that they won’t clear instantly, as earlier new classical economists assumed. Rather, markets won’t clear in time period t because of sticky wages and sticky prices, seen especially in the labor market, where wages adjust sluggishly over time as a result of labor contracts. New Keynesians were “divided” into two early waves: those who looked to understand the allocation of resources when markets don’t clear in one time period, and those who looked at rational expectations and market clearing. These first two waves failed to explain why prices and wages are sticky in the first place. Thus, a third wave of new Keynesians came onto the scene and answered these earlier questions by arguing that firms face menu costs when changing their prices, that firms pay their workers efficiency wages above the market level to increase productivity, and that decision making deviates from fully rational thinking. Mankiw evaluates macroeconomic theory up to this point and argues that as a science, macroeconomics was successful. So, how did it fare as an engineering discipline in informing policy? He suggests that the answer is much less positive.

Long-run growth, rather than short-run fluctuations, became a hot topic in the 1980s and early 1990s for three reasons: there was an ever-increasing gap between rich and poor countries, cross-country data became more readily available, and the U.S. economy in the 1990s was experiencing remarkable growth. The clash, a sort of schism, between new classicals and new Keynesians was growing larger, but as the older economists retired, a newer and more civil generation stepped onto the scene and looked to “improve” the image of macroeconomics. One way to evaluate the degree to which economics is an engineering discipline is to look at Laurence Meyer’s A Term at the Fed. Meyer, a professor, served one term as a governor of the Federal Reserve. The book lets readers see the approaches taken in analyzing the economy, and the bottom line is that the work of new classicals, new Keynesians, and others has had “close to zero impact on practical policymaking.” What explains this? For one thing, there seems to be a disconnect in the field of macroeconomics. While the Federal Reserve is independent, it doesn’t follow explicit policy rules of the kind Friedman proposed, which would give people a firm expectation of the growth rate of the money supply. Mankiw views inflation targeting, a policy rule implemented by the European Central Bank (ECB), as a way to “communicate with the public” rather than a rule that grew out of macroeconomic theory. What about low rates of inflation? Countries whose central banks have imposed inflation bands and those that have not (e.g., the United States) have both experienced low levels of inflation for a long stretch of time. The explanation could be one of two things: either supply shocks haven’t been as prevalent as they were during the oil crisis of the 1970s, or central banks have realized that high inflation, as experienced in the 1970s, is so detrimental to the economy that it should be avoided at all costs.

The other side of the coin is the effectiveness that macroeconomic theory has had on the practical applications of fiscal policy. Bush’s tax cuts, aimed at taxing consumption rather than income, are consistent with the literature in public finance, especially the work of Atkinson and Stiglitz in the 1970s. The short-run analysis of tax policy is consistent with Keynesian economics: lower taxes mean more disposable personal income, which leads to higher demand for goods and services. In conclusion, Mankiw views economics as more of a science than an engineering tool. The reason is not that the Federal Reserve and the government ignore the new ideas and theories developing in the field of macroeconomics. Rather, “modern macroeconomics research is not widely used in practical policymaking [because there is] little use for this purpose.” Undergraduates, though, are more like the engineer than the scientist. Except for the few who want to pursue economics in academia (i.e., the science), the majority of undergraduate students want to see how macroeconomic tools can be applied to the real world for effective policymaking (i.e., the engineering).

Source: Mankiw, N. Gregory. 2006. The macroeconomist as scientist and engineer. Harvard University (May): 1-26, http://www.economics.harvard.edu/faculty/mankiw/files/Macroeconomist_as_Scientist.pdf (accessed April 10, 2008).

A Response to Hoover’s “Is Macroeconomics for Real?”

Tuesday, April 8th, 2008

Kevin Hoover’s basis for writing “Is Macroeconomics for Real?” comes from comments written anonymously on his class evaluations.  Many of the students side with the view, commonplace among economists, that macroeconomics isn’t “real” because it cannot stand alone.  Rather, it is built on microfoundations, something that has been seen time and time again through many readings.  Oftentimes, older macro theories are rejected because they don’t incorporate microfoundations, such as utility maximization and other maximizing behaviors.  Hoover, however, argues that macroeconomics is a stand-alone discipline that cannot be reduced to a microeconomic form.  Hoover starts with basic definitions of macroeconomics and microeconomics, defining micro as the “economics of individual economic actions” and macro as “the economics of broad aggregates.”  Hoover notes, though, that Keynes didn’t define the two fields this way.  Though he didn’t use the terms macro and micro when making the distinction, to him [Keynes], microeconomics was the “theory of the individual industry or firm” and macroeconomics was the “theory of output and employment as a whole.”  Macroeconomics has expanded beyond Keynes’ definition, but the aggregates he referenced still include GDP, unemployment, interest rates, the flow of financial resources, etc.

I will attempt to answer Hoover’s question by summing up his claims, philosophical undertones and all.  He references Uskali Maki (1994) to pin down the “real” in the title of his article.  Maki distinguishes ontological realism from semantic realism: ontological realism concerns “what there is,” while semantic realism concerns the connection between “language and what there is.”  Remember, Hoover’s aim is to see whether macroeconomics can remain independent of microfoundations.  Through his investigation and questioning, he determines that macroeconomic aggregates “exist externally”; that is, they don’t rely on microfoundations, which is a huge shakeup from mainstream economic thinking since the 1940s.  Lionel Robbins (1935) makes the blanket statement that “economics is the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses.”  From this statement alone, economics is about the individual, a microeconomic slant.  Keynes, however, developed the modern theory of macroeconomics with three main relationships: the consumption function, which relates aggregate consumption to national income; an investment function based on interest rates; and the liquidity preference, which relates money demand to interest rates.  Thus, going solely off of Robbins’ definition, Keynes’ contributions would be invalid because his models are not based on the behavior of the individual.
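In standard textbook notation, the three Keynesian relationships mentioned above look something like this (my shorthand, not Hoover’s or Keynes’ own notation):

```latex
C = C(Y), \quad 0 < C'(Y) < 1 \qquad \text{(consumption function)}\\
I = I(r), \quad I'(r) < 0 \qquad \text{(investment)}\\
\frac{M}{P} = L(r,\,Y) \qquad \text{(liquidity preference)}
```

Each relationship is stated directly in terms of aggregates (national income Y, the interest rate r, real money balances M/P) rather than being derived from any individual’s maximization problem, which is precisely the feature the microfoundations critics object to.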

Hoover continues by looking at Mark Blaug’s (1992) treatment of methodological individualism, which makes the claim that “social, political, or economic phenomena” cannot be explained without an understanding of the decisions of individuals.  Even Augustin Cournot, writing in the 19th century, observed that “there are too many individuals and too many goods to be handled by direct modeling.”  Blaug, nevertheless, goes on to observe that “few explanations of macroeconomic phenomena have been successfully reduced to their microfoundations.”  Robert Lucas (1987) is a strong supporter of the individualism principle and the idea of microfoundations.  He and his colleagues have worked extensively on new classical economics, which assumes that representative economic agents (the individuals) make decisions to reach their optimal choices.  Essentially, macro theory must use these fundamental microeconomic elements (e.g., utility maximization, consumption maximization) in order to have any validity.  A. P. Kirman (1992) criticizes the idea of a “representative agent” because it fails to represent actual individuals.  Individuals inherently seek to maximize their utility and consumption, but without rationally modeling and plotting out their optimal points along a budget constraint.  David Levy (1985) argues along the same lines, because information isn’t perfect.  As previous blogs have alluded to and directly mentioned, the assumptions behind some of these macro theories, though built upon microfoundations, don’t hold water because they are too “naive” and simplistic.  Though simplicity helps with the model, what good is a model that doesn’t accurately capture actual observed behavior?

Hoover’s next set of arguments is based around the “validity” of the macroeconomic aggregates.  Nobody doubts that GDP, unemployment, and interest rates are interconnected.  People do disagree, however, that these aggregates are the “fundamental units” that construct economic reality.  Hayek (1979) says it best: these entities are secondary because they cannot be explained and fully understood without an understanding of their individual components.  This statement reverberates through all the macro theory that criticized other theories for failing to be based upon microfoundations.  Nevertheless, even Hayek doesn’t believe in the pure definition of individualism, citing the Cournot problem.  (Can somebody please tell me what the Cournot problem is?  Judging from the Cournot quotation above, I think it is that an economy contains too many individuals and too many goods to model directly, so pure individualism is infeasible in practice, but I’m not sure.)

Referring back to the aggregates that consume macroeconomics, Hoover states that there are two kinds: natural and synthetic.  Natural aggregates are simple sums or averages, such as total employment or an average interest rate on commercial paper or Treasury securities over a certain period of time.  Hoover terms them natural because they are measured in the same units as the individual components from which they are calculated.  The other kind is synthetic.  Synthetic aggregates are “fabricated out of components” and therefore have a different structure.  The main example here is the aggregate/general price level.  A simple average of all prices will not work because apples and oranges cannot be added together.  The ultimate goal is to find the price of money, in order to see what something is worth in real terms.  Again, this is difficult to accomplish because the overall economy is complex: there would have to be thousands of equations to capture all of the movements in the economy, which is next to impossible and very, very time consuming.  The story goes on to the indexes that have to be constructed instead.  Indexes give insight into general price levels because, once again, percent changes in certain goods and services weigh more heavily on the overall economy and the “price of money” than others.  For instance, Hoover says a change in the price of gasoline will have a larger impact than a change in the price of caviar.  Thus, indexes reflect weights that have to be applied to certain industries and sectors of the economy.  (As a side note, PPIs and CPIs are calculated both with and without food and energy because these two areas of the economy are the most volatile and have a large impact on the perceived rate of inflation.)  This same thought process holds for the need to calculate real GDP.  Price changes are bound to occur, so nominal GDP can rise even if quantities do not change.  Real GDP is therefore needed to see whether prices rose while output stayed flat, or whether the economy experienced an increase in output due to more efficient methods.  If the latter is true, real GDP will go up.  If the former is true, only nominal GDP will increase, due to the rise in prices.
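A toy example makes the weighting and deflating concrete.  The numbers below are invented for illustration (a two-good economy), not drawn from Hoover:

```python
# Illustrative only: made-up prices and quantities for a two-good economy,
# showing how a weighted (Laspeyres-style) price index and real GDP are built.

base = {"gasoline": {"p": 2.00, "q": 100}, "caviar": {"p": 50.00, "q": 1}}
now  = {"gasoline": {"p": 3.00, "q": 100}, "caviar": {"p": 55.00, "q": 1}}

# Laspeyres price index: cost of the base-period basket at current prices,
# relative to its cost at base-period prices. Base-period quantities act as
# weights, so gasoline (bought in volume) moves the index far more than caviar.
base_cost = sum(g["p"] * g["q"] for g in base.values())
now_cost  = sum(now[k]["p"] * base[k]["q"] for k in base)
price_index = now_cost / base_cost

# Real GDP: value current quantities at base-period prices, so that
# pure price changes do not register as growth.
nominal_gdp = sum(g["p"] * g["q"] for g in now.values())
real_gdp    = sum(base[k]["p"] * now[k]["q"] for k in now)

print(f"price index: {price_index:.3f}")   # 1.420 -> a 42% rise in the price level
print(f"nominal GDP: {nominal_gdp:.2f}, real GDP: {real_gdp:.2f}")
```

Here nominal GDP rises 42 percent while real GDP is flat, which is exactly the case where only prices, not output, changed.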

The following section discusses supervenience, which I don’t fully understand.  I will quote the passage, but I can’t provide much insight because it doesn’t entirely make sense to me.  On page 12, Hoover says, “Macroeconomic aggregates I believe supervene upon microeconomic reality.  What this means is that even though macroeconomics cannot be reduced to microeconomics, if two parallel worlds possessed exactly the same configuration of microeconomic or individual economic elements, they would also possess exactly the same configuration of macroeconomic elements.”  The reverse doesn’t necessarily hold true.  (My best guess at a paraphrase: the micro level fixes the macro level, even though there is no formula for translating one description into the other.)  On a different note, Hoover discusses irreducible aggregates and their ability to be manipulated.  Some macroeconomic aggregates can not only be controlled but “can be used to manipulate other macroeconomic aggregates” (i.e., real interest rates and price levels and their effect on yield curves).  He ends by stating that the paper only attempted to show the current behavior and interplay between macroeconomics and microeconomics.  Hoover does go on to mention that there are macroeconomic aggregates that are irreducible and, consequently, cannot be built upon microfoundations.  Therefore, these entities are indeed “real.”

Source: Hoover, Kevin D.  1999.  Is macroeconomics for real?  University of California-Davis (June): 1-22, http://users.umw.edu/~sgreenla/e488/Macreal.htm (accessed April 8, 2008).

A Response to Wynne’s “Sticky Prices: What is the Evidence?”

Thursday, April 3rd, 2008

Mark Wynne looks at the evidence on whether changes in the stock of money have implications for the overall economy (i.e., employment, growth rates) in the short run.  This has obvious implications for the effectiveness of monetary policy because, through open market operations, the Federal Reserve controls the money supply.  For over two hundred years this issue has been debated, and the only conclusion that seems to emerge from the debate is that “prices are ‘sticky’ at nonmarket-clearing levels.”  This will directly affect the real factors of the economy.  Suppose that people were magically inundated with more money than they had before.  Also, suppose that this increase in the money supply was a one-time, unexpected policy.  Since it can be assumed that each person was holding the optimal amount of cash prior to the increase in the money supply, this excess cash would be spent on goods and services.  However, if everybody spent their excess cash holdings to return to their optimal holding of cash (the one that existed before the increase in the money supply), nothing would motivate producers to put out more output.  Thus, the long-run result would be an increase in the price level in the same proportion as the increase in the money supply.  New Keynesians, however, are interested in the “transition stage” that runs from the event that knocked the economy out of equilibrium until the time that equilibrium is restored.  This transition stage, according to Wynne, could play out in one of two ways: either an instantaneous increase in the price level, which would end the story, or a rigidity of prices.  The rigidity of prices is the more interesting situation.  If some producers are slow to raise their prices, due to menu costs and the other situations that were discussed in class (even though nominal demand has increased with this excess money in the economy), then output in the short run may increase without a proportional increase in prices.  This increased output would show up as a real increase in the short run, until all firms have had a chance to raise prices in proportion to the initial increase in the money supply.  Wynne mentions in the introduction that his article will focus on sticky prices rather than wages, because many analysts mistake the failure of wages to adjust for wage stickiness.  Rather, this “rigidity” is due to spelled-out labor contracts, which require the wage to be paid out in installments in the form of paychecks.  Therefore, it is because of these locked-in labor contracts that wages don’t adjust as often as prices.
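The long-run proportionality in this thought experiment is just the quantity equation at work (a standard identity, my addition rather than Wynne’s notation):

```latex
M V = P Y
```

With velocity V and real output Y unchanged in the long run, scaling the money supply from M to \lambda M forces the price level from P to \lambda P, so the one-time injection leaves nothing real behind.  The new Keynesian question is what happens to Y along the way if P moves slowly.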

Wynne’s earlier study with Sigalla in 1993 concluded that the raw data used to compile the producer price index and the consumer price index are often list prices instead of transaction prices.  There are two explanations for this practice: firms report list prices to protect themselves against potential antitrust litigation, and to keep actual transaction prices from falling into the hands of competitors.  To get around this dilemma, the BLS will take the average sticker price across various stores along with the average discount or coupons associated with the purchase of the product.  This averaging of raw price data makes it difficult to assess the flexibility of prices.  Since some average prices fluctuate more than their “constituent price series,” averages, too, make for an unreliable estimate of price flexibility.  Wynne points to the earliest study of the frequency of price changes, conducted by Mills (1927).  Using wholesale price index (WPI) data, Mills recorded 206 commodities and computed, for each, a ratio ranging from 0 to 1: equal to zero if the price never changed over the period monitored and equal to one if the price changed every period recorded.  The distribution of these ratios was U-shaped; that is, many commodities exhibited no price changes over the recorded time frame and many commodities exhibited price changes almost every period, with fewer commodities falling in the middle range.  The products that exhibited the most price changes were farm products.  (An interesting note is that during WWI the distribution wasn’t U-shaped, but rather showed an even spread of commodities with ratios in the middle of the graph and many commodities at the right-hand side.)  The two criticisms of Mills’ work are that he used averages and that he used list prices rather than transaction prices.
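Mills’ ratio is easy to state in code.  A hypothetical sketch (the price series below are invented for illustration; Mills of course worked from WPI tables, not Python):

```python
# Frequency-of-price-change ratio in the spirit of Mills (1927):
# for each commodity, the fraction of periods in which its price changed;
# 0 if the price never changes, 1 if it changes every period.

def change_ratio(prices):
    changes = sum(1 for a, b in zip(prices, prices[1:]) if a != b)
    return changes / (len(prices) - 1)

series = {
    "farm product": [10, 11, 9, 12, 10, 13],   # changes every period -> 1.0
    "steel":        [40, 40, 40, 40, 40, 40],  # never changes        -> 0.0
    "textiles":     [5, 5, 6, 6, 6, 7],        # middle of the range  -> 0.4
}

for name, prices in series.items():
    print(f"{name}: {change_ratio(prices):.2f}")
# Plotting such ratios for 206 commodities, Mills found a U-shaped
# distribution: mass piled up near 0 and near 1, with little in between.
```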

Cecchetti’s (1986) study of magazines is a good example of a price stickiness study.  He was able to get away from the criticisms that plagued Mills (1927) because magazine cover prices are transaction prices and there are few discounts associated with magazines.  His sample period of 1953-1979 suggested a high degree of price stickiness: during the high-inflation 1970s, nominal cover prices changed so infrequently that real prices were eroding substantially between changes.  From this he concluded that menu costs, that is, the fixed costs of changing prices, were very high.  Nevertheless, his study had shortcomings of its own that Mills didn’t experience.  He looked at newsstand prices of magazines, but many people buy magazines by subscription, which is similar to the criticism involving labor contracts.  That is, these individuals enter into a contract with the magazine company for a year, and many times subscriptions allow customers to receive discounts.  This common arrangement, unfortunately, isn’t reflected in Cecchetti’s 1986 study.

Koelln and Rush (1993) look at whether controlling for the quality of a product affects measured price rigidity, something for which Cecchetti couldn’t control.  Looking at magazine data from 1950-1989, the two conclude that Cecchetti’s estimate of price rigidity was overstated.  Using the number of pages of text as a quality measure, they find that as inflation “erodes the real price of the magazine,” the number of pages of text declines.  Therefore, this “price rigidity” can be confused with the declining quality of the product.  Carlton (1986) revisits Stigler and Kindahl (1970), who looked at transaction prices rather than list prices for various industrial commodities.  Stigler and Kindahl (1970) collected data from buyers rather than sellers because buyers have less of an incentive to report list prices.  From these data, Carlton concludes that industrial commodities, especially in industries dealing with steel, chemicals, and cement, kept prices unchanged for periods of at least one year.  Other studies have looked at the retail business.  Kashyap (1991) examined retail catalogues and concluded that nominal prices remain unchanged for periods of at least one year, and when prices do change, both the magnitude and the number of changes are irregular.  Blinder (1991) conducted interviews with firms and found that fifty-five percent of the firms interviewed claimed to change their prices no more than once a year, with only ten percent claiming to change prices monthly.  An interesting note from Blinder’s study is that three-fourths of the firms will change something other than price (i.e., delivery lags, quality of products) when demand is tight.

There are some overall assessments to make of the price stickiness literature.  Many of the studies Wynne mentions deal with only a small fraction of the country’s GDP (e.g., magazines).  Other studies deal with intermediate products rather than finished products (e.g., industrial commodities).  Lastly, and of most importance, are the price rigidity studies that actually deal with transactions involving money: since many products are bought via credit, the prices studied don’t represent the demand for money, and consequently these studies cannot determine whether money plays an important role in price rigidities.  Wynne also brings to the reader’s attention that studies such as Cecchetti’s and Stigler and Kindahl’s reaffirmed their theories of price rigidity rather than testing for it.  What I mean by this is that these studies picked areas of the economy where it was already suspected that prices were inflexible, and thus they produced biased results that reaffirmed, rather than proved, that prices in these markets were rigid.  Carlton (1983) also criticizes the studies done on price rigidities.  For instance, it is known that price controls during WWII held nominal prices at a constant level; to get around this, however, the quality of the products being offered was decreased.  Thus, in a sense, the products were no longer homogeneous because of the varying quality of the products being assessed.

Wynne concludes that there is little evidence to suggest that prices are sticky in the overall economy.  Given all the thinking built on the assumption of price stickiness, he was shocked that only three studies could be produced showing actual price stickiness.  Measured price stickiness can also be deceiving: firms can withhold delivery during periods of heightened demand or lower the quality of the product while leaving the posted price unchanged.  In essence, just because markets take longer to clear than in a Walrasian auction doesn’t mean that the evidence points to price rigidities.  To go back to the original question regarding the effectiveness of monetary policy and its effects on the real side of the economy: only a small degree of price rigidity needs to be in place for external monetary shocks to be able to trace out the observed business cycle.  Even if all prices were deemed flexible, monetary policy could still affect the real side of the economy; the shocks would simply then come from macro market failures or market incompleteness.

Source: Wynne, Mark A.  1995.  Sticky prices: What is the evidence?  Federal Reserve Bank of Dallas Economic Review (1st Quarter): 1-12.

A Response to Greenwald and Stiglitz’s “New and old Keynesians”

Tuesday, April 1st, 2008

Greenwald and Stiglitz start off by making three claims upon which old and new Keynesians would agree: there will be an excess supply of labor at a given market wage, the aggregate level of output will fluctuate with a greater magnitude than can be accounted for by short-run changes in technology, and money matters, though monetary policy has proven ineffective during certain periods (e.g., the Great Depression).  What separates them from new classicals, nevertheless, is the notion that government intervention via policy decisions can be effective some of the time.  From the start, the two authors draw the comparison to new classical and RBC theorists.  Those schools of thought hold that all markets clear in one time period, that there are no sticky prices or wages, that unemployment is voluntary (shown by shifts of supply and demand in the labor market), and that there are no macro market failures, which allows for efficient responses to externalities (i.e., shocks).  As noted by Greenwald and Stiglitz, the only difference between new classicals and RBC theorists lies in the shocks that affect the aggregate output of the economy: for the new classicals it is shocks to the money supply, whereas RBC theorists focus on technology shocks.  Nevertheless, the two schools of thought, though basing their macroeconomic models on microeconomic foundations or “microfoundations,” assume that firms interact in a perfectly competitive market, that there is perfect information, that there are no transaction costs, and that no risk is borne by economic agents since all individuals are homogeneous.  Greenwald and Stiglitz end their introduction with a few questions about the “validity” of these earlier macro models.  Some things that cannot be answered by new classicals or RBCs are why hours worked vary, why some industries see higher rates of layoffs, and why investment and inventories in certain industries are so volatile.

The article’s jumping-off point is price rigidities, both nominal and real.  The background here is the observation that markets don’t clear in one time period.  If they did, prices and wages would be flexible, and whenever the market encountered a shock it would adjust instantaneously, maintaining full employment and output at its potential.  This isn’t the case, however, which is why there is a discussion of these inflexibilities at all.  According to the authors, markets benefit from having rigid prices and wages because rigidity lessens the volatility and magnitude of fluctuations in the economy.  To explain the rigidities of prices and wages observed in the market, Greenwald and Stiglitz introduce three basic ingredients, all found in markets with imperfect information: risk-averse firms, a credit allocation mechanism in which risk-averse banks play a central role, and new labor market theories that include “efficiency wages and insider-outsider models.”

Risk-averse firms have two financing options: issuing equity or issuing debt instruments.  There is much less risk in issuing equity because the firm shares the risk with those who provide the finance.  Issuing debt, on the other hand, means the firm has an obligation to repay and thus faces the risk of going bankrupt.  So it seems obvious that firms would issue equity, but there is a negative side, as the authors point out: the market perceives equity issues badly.  The market’s opinion is that the “worst firms” will be the ones most likely to issue equity, because those firms may be overvalued and are trying to sell additional shares.  Why are firms risk averse?  The answer is that managers run firms that way, because they are more aware of the status quo and less able to predict what will happen if the firm changes its actions (which is termed “instrument uncertainty”).  Just as in modern portfolio theory, firms assess various portfolios of actions to assume the least amount of risk for a given return, or vice versa.  If prices change, so too will the actions of firms and their resultant portfolios, whether by changing the prices they charge or the quantities they produce to keep customers content.  The example given of why risk aversion matters deals with a recession.  In a recession, a firm has less cash to operate with and lower profits, which reduces both the real net worth of the firm and its liquidity.  Therefore, to assume less risk, a firm will decrease output.  Alternatively, if a firm wants to remain at its pre-recession output level, its decreased net worth and reduced cash on hand will force it to borrow; in other words, the firm will assume more debt.  This translates into a higher probability that the debt won’t be paid back and the firm will go into bankruptcy.  Therefore, during recessions, supply curves shift to the left and output is reduced to compensate for lower real net worth and less liquidity, so that a firm doesn’t take on more risk.  The two authors also mention that investment will be volatile in the construction market, because that particular market is made up of numerous small firms, many of which don’t have easy access to the equity market and therefore rely heavily on financing their endeavors through debt instruments.  The authors point to one more example, in which a decrease in net exports decreases an exporter’s net worth.  This leads to a decrease in the demand for inputs, which drives down the prices of those inputs in other markets.  A “spillover” effect travels from firm to firm and from market to market.  It is because of this “spillover” phenomenon that micro-level industries cannot simply be aggregated to produce the macro picture.  Rather, these spillovers, as seen in this simple example, compound and amplify as they move from one firm to another and from one market to another.

The second basic ingredient behind price rigidities is the credit market and risk-averse banks.  Unlike the goods market, which operates like an auction in which the good is sold to the highest bidder, the credit market doesn’t function this way.  Because credit institutions are risk averse and worried about loans not being repaid, they will not lend to the highest bidder.  Rather, they engage in credit rationing, in which “interest rates are chosen to maximize the expected utility of the lender.”  Like firms, banks are risk averse, and they need to be even more so in today’s age of the subprime mortgage meltdown.  Instead of screening customers to see whether they had a high probability of repaying their loans, lenders seemed to violate the Greenwald-Stiglitz argument by in effect running an auction and selling loans to whoever wanted one.  As with firms, banks respond in a similar fashion to a recession.  As the economy worsens, banks’ perceptions of the relative riskiness of loans increase.  Since bad economic times mean a higher rate of loan defaults, banks experience a decrease in their net worth as debt instruments are “sold” but not repaid.  As a result of these hardships, banks also engage in portfolio management, shifting their composition toward less risky assets (e.g., Treasury bills).  According to Greenwald and Stiglitz, equilibrium could then be reached only at a higher interest rate, which would discourage investment.  However, this isn’t the observed behavior, because new Keynesians hold that price rigidities are in place to reduce the magnitude of fluctuations in the market and to keep customers content.  Banks therefore do not raise interest rates, investment is not discouraged, and in the aggregate banks end up assuming greater risks.  As a result of all this, the Federal Reserve can be effective in a few ways, namely changing reserve requirements and the discount window, rather than the accustomed lowering of the federal funds rate.  (Lowering the federal funds rate may not adjust the supply of loans enough to make banks more “sound”; using the other two monetary tools raises a bank’s net worth because it can borrow from the Fed at a cheaper rate.)
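The credit-rationing logic can be sketched numerically.  This is my own toy illustration with an invented default function, not a model from the paper:

```python
# Toy sketch of the credit-rationing logic: raising the contracted interest
# rate raises the lender's return *if* the loan is repaid, but (by assumption
# here) also raises the chance of default, so the expected return peaks at an
# interior rate. A risk-averse bank charges that rate and rations credit
# rather than lending to the highest bidder.

def default_prob(r):
    # Assumed for illustration: default risk rises with the contracted rate
    # (riskier borrowers and projects self-select at higher rates).
    return min(1.0, 0.02 + 2.0 * r ** 2)

def expected_return(r):
    # Earn r when repaid; assume half the principal is lost on default.
    return (1 - default_prob(r)) * r - default_prob(r) * 0.5

rates = [i / 100 for i in range(1, 41)]          # candidate rates 1%..40%
best = max(rates, key=expected_return)
print(f"expected-return-maximizing rate: {best:.2f}")        # ~0.27
print(f"expected return at 40%: {expected_return(0.40):.3f} "
      f"vs at the chosen rate: {expected_return(best):.3f}")  # 0.094 vs ~0.142
```

Because the expected return peaks at an interior rate, the lender turns away borrowers offering to pay more than that rate, which is exactly the refusal to sell to the highest bidder described above.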

The third ingredient is the labor market.  Old Keynesian economics referred to the unemployment phenomenon but didn’t discuss the workings of the labor market.  New Keynesians suggest an alternative to the new classical view by claiming that even though a worker is willing to work at the going market wage, he may be unable to find work; this is the phenomenon known as involuntary unemployment.  It can be caused by efficiency wages, the insider-outsider theory, imperfect competition, and implicit contracts.  The efficiency wage theory says that higher real wages lead to higher productivity through the attraction of higher quality labor (a sketch of one standard formalization follows this paragraph).  The insider-outsider theory claims that “outside” workers won’t be hired at cheaper wages because the “insiders” are the ones responsible for training them.  Since labor is heterogeneous, insiders and outsiders are of different quality: the insiders have been trained and the outsiders have not, so there isn’t a perfect substitute.  As a result, insiders do not want to be replaced by cheaper outside workers, and since the insiders control the training process, they will refuse to train outsiders hired at a lower real wage.  The third reason for sticky wages has to do with imperfect competition.  The nature of imperfect competition means that each firm sets its own wages, prices, and employment levels.  As mentioned earlier, firms are risk averse and don’t know what a lower real wage would do to their activities and production, which is why the wage won’t be decreased.  Lastly, implicit contracts have been echoed throughout the article.  In a nutshell, firms want to keep their employees happy and content, and to do so they must provide an incentive for employees to stay with the firm during “boom periods,” when they could easily look for a better job elsewhere.
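One standard way to formalize the efficiency-wage story (my addition; the article itself doesn’t spell out the math): if worker effort e(w) rises with the real wage, a firm minimizing labor cost per unit of effort, w / e(w), sets the wage where

```latex
\frac{w^{*}\, e'(w^{*})}{e(w^{*})} = 1
```

This is the so-called Solow condition: the elasticity of effort with respect to the wage equals one.  If the resulting w^{*} sits above the market-clearing wage, the excess supply of labor at that wage is involuntary unemployment.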

As learned in class, nominal price rigidities also exist because of “menu costs.”  That is, the costs of changing prices (e.g., disseminating the new information to consumers, the physical costs of changing prices) may outweigh the benefits.  In many instances, a firm’s profit function has a flat top: several combinations of output at the existing price produce very similar profits.  If this is the case, it won’t pay for a firm to change its prices at the risk of losing profits, especially since these firms are risk averse and don’t want to disrupt the status quo.  Game theory also plays a part in the rigidity of prices and wages under the new Keynesian model.  Since the money supply is not perfectly observed by all agents, not all agents will change their prices proportionally.  Because of this uncertainty about how other economic agents will react to changes in the money supply, it would be sub-optimal to increase your own prices in proportion to the increase in the money supply.  Therefore, no agent increases prices, at least not by as much as the increase in the money supply.
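The flat-top point is easy to see with invented numbers.  A minimal sketch (my illustration with linear demand, not the article’s model):

```python
# Invented numerical illustration of the "flat-top" profit logic behind menu
# costs. Profit near the optimum is insensitive to small pricing errors, so a
# modest menu cost is enough to keep the nominal price fixed after a shock.

def profit(p, a=100.0, b=10.0, c=4.0):
    """Profit with linear demand q = a - b*p and constant unit cost c."""
    q = max(0.0, a - b * p)
    return (p - c) * q

# With unit cost 4.0 the optimal price is p* = (a/b + c) / 2 = 7.0.
p_old = 7.0

# A small shock raises unit cost to 4.4; the new optimal price is 7.2.
c_new = 4.4
p_new = (100.0 / 10.0 + c_new) / 2

gain = profit(p_new, c=c_new) - profit(p_old, c=c_new)
print(f"profit left on the table by not repricing: {gain:.2f}")  # ~0.40
# Against profits of roughly 78, that is about half a percent: any menu cost
# above 0.4 makes keeping the old price the rational choice.
```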

Greenwald and Stiglitz end their discussion by looking back at the RBC theorists and the new classicals.  The RBC school focuses on economic volatility and says that it results from external and unforeseen technology shocks.  From a new Keynesian standpoint, however, how would that explain a recession?  Was there really a negative technology shock?  New classicals feel that imperfect information is the reason why there are deviations around potential output and full employment.  While imperfect information and the resulting shifts in demand and supply curves matter (i.e., in response to a shock), they aren’t the principal reason.  What confused me about the critique of the new classical model was that it didn’t explain in layman’s terms what was lacking and what was “improved upon” by new Keynesian economics.  I think what Greenwald and Stiglitz hope the reader comes away with is that imperfections exist in the macroeconomy and that these imperfections can amplify at the macro level, leading to deviations from potential output and abnormally high levels of unemployment.

Source: Greenwald, Bruce, and Joseph Stiglitz.  1993.  New and old Keynesians.  Journal of Economic Perspectives 7 no. 1 (Winter): 23-44.  (This can be found in the class Reader.)