A Response to Mankiw’s “The Macroeconomist as Scientist and Engineer”

April 10th, 2008

Mankiw begins his endeavor by relating the field of economics to a science, though whether it’s a social science or a hard science is up for debate. Nevertheless, he feels it should be termed a science so that undergraduates starting out in the discipline don’t mistake it for haphazard guessing about how the world is and how it should be shaped by policy decisions. It’s a science for one simple reason: “economists formulate theories with mathematical precision, collect huge data sets on individual and aggregate behavior, and exploit the most sophisticated statistical techniques to reach empirical judgments that are free of bias and ideology.” Economics is also a type of engineering, because it was developed to solve practical problems. Mankiw sets up his paper to trace the history of macroeconomics and evaluate what we have learned. His goal is to show that macroeconomics grew out of two mindsets–those who view it as a science (understanding how the world works) and those who view it as a type of engineering (an application or tool to solve problems). He concludes the introduction by claiming that macroeconomics started out as an engineering discipline, where people attempted to solve problems, and only in the last several decades did it become a science, where theories and tools were developed with little or no practical application (though others would beg to differ, I’m sure).

Macroeconomics first appears in the literature during the 1940s as a result of the Keynesian Revolution. Many Nobel laureates (Solow, Klein, Modigliani, Samuelson, and Tobin are specifically named in the paper) point to Keynes’ General Theory as their starting point in the field of macroeconomics. This influential book probably would not have been written had it not been for the Great Depression, because “there is nothing like a crisis to focus the mind.” Keynes’ General Theory left a lot of questions unanswered, especially the question of what model tied all of his thoughts together. This spurred others to continue in this field of economics, with early attempts by Hicks and Modigliani to develop a clearer model of the macroeconomy using the IS-LM model. Though critics say it is too simplified to capture the macroeconomy, the whole point of the IS-LM model was to simplify a “line of argument that was otherwise hard to follow.” In that respect, the IS-LM model did its job. It’s just not the entire story. By the 1960s, many large simultaneous-equation models had been developed to forecast and evaluate the effectiveness of policy, a framework the Federal Reserve’s FRB/US model still uses to this day. As Mankiw points out, the science of economics became the engineering of economics starting in the 1940s, when the theorists behind macroeconomics wanted to put their ideas to use and worked as advisors to presidents to formulate policy.
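
To see how the IS-LM machinery works, here is a minimal sketch that solves a linear IS equation and a linear LM equation jointly. All of the parameter values are made up for illustration; none of them come from Mankiw’s paper.

```python
import numpy as np

# A minimal IS-LM sketch with made-up linear parameter values.
# IS:  Y = c0 + c1*(Y - T) + (i0 - i1*r) + G   (goods market)
# LM:  M/P = l1*Y - l2*r                        (money market)
c0, c1, i0, i1 = 200.0, 0.6, 150.0, 1000.0   # consumption and investment parameters
l1, l2 = 0.5, 2000.0                          # money demand parameters
G, T, M, P = 250.0, 200.0, 1000.0, 2.0        # policy variables (assumed)

# Rearrange into A @ [Y, r] = b and solve the two equations jointly.
A = np.array([[1.0 - c1, i1],
              [l1,      -l2]])
b = np.array([c0 - c1 * T + i0 + G, M / P])
Y, r = np.linalg.solve(A, b)
print(f"equilibrium output Y = {Y:.1f}, interest rate r = {r:.4f}")

# Fiscal expansion: raising G shifts IS to the right, raising both Y and r.
b2 = np.array([c0 - c1 * T + i0 + G + 50.0, M / P])
Y2, r2 = np.linalg.solve(A, b2)
print(f"after G + 50:      Y = {Y2:.1f}, r = {r2:.4f}")
```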

The first challenges to Keynesian economics, with their classical undertones, came with monetarism and new classical economics. Monetarists attacked the Keynesian consumption function: Milton Friedman theorized that the marginal propensity to consume out of current income is much smaller than Keynes assumed, producing much smaller multiplier effects throughout the economy than Keynes’ model predicts. Mankiw does make it a point that though Friedman’s proposal of transparent, easy-to-understand rules for the Federal Reserve doesn’t have a strong following, it was a precursor to the practice of other countries’ central banks, which have established bands within which inflation rates can move. During the 1960s, the Phillips curve helped complete the Keynesian model, which otherwise lacked a theory of inflation. Though Keynes knew there was a relationship between unemployment and inflation, he said little about it. Friedman, however, recognized that there is a difference between the short-run and long-run tradeoffs. In the short run, the Phillips curve relationship holds water because inflation may be unexpected or unanticipated, and therefore unemployment can decrease. In the long run, however, the relationship breaks down because of expectations, which was a huge step forward in macroeconomic theory. Rational expectations, especially the Lucas Critique, built on Friedman’s introduction of expectations. The Lucas Critique says that “Keynesian models were useless for policy analysis because they failed to take expectations seriously.” Lucas continues by claiming that the economy consists of rational economic agents with imperfect information. Markets will clear, but monetary policy may get in their way, because all monetary policy does is confuse people about the difference between absolute (nominal) prices and relative (real) prices. Real business cycle theorists were the third wave of new classical economists to branch off from the Keynesian Revolution. Like the rational expectations theorists, RBC economists assumed markets clear instantly, but where they differed is that they viewed monetary policy as ineffective, and thus left it out of their business cycle model. Rather, business cycles were traced out by random technology shocks and the resulting intertemporal choice between work and leisure–the determinant of unemployment.
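
Friedman’s short-run/long-run point is easy to see with a toy expectations-augmented Phillips curve. This is my own illustration with assumed numbers, using simple adaptive expectations, not anything from Mankiw’s paper.

```python
# Sketch of the short-run vs. long-run Phillips curve:
#   pi_t = pi_e_t - beta*(u_t - u_n), with adaptive expectations pi_e_t = pi_{t-1}.
beta, u_n = 0.5, 5.0          # slope and natural rate (assumed values)
pi = 2.0                      # start at the natural rate with 2% inflation

print("holding unemployment 1 point below the natural rate:")
for t in range(5):
    pi_e = pi                        # expectations catch up to last period's inflation
    u = u_n - 1.0                    # policy keeps unemployment below u_n
    pi = pi_e - beta * (u - u_n)     # actual inflation overshoots expectations
    print(f"  t={t}: expected {pi_e:.1f}%, actual {pi:.1f}%")
# The tradeoff works only while inflation is unanticipated: each period
# expectations adjust, so inflation ratchets up (2.5, 3.0, 3.5, ...) while
# the unemployment gain disappears in the long run.
```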

New Keynesians came onto the scene and took up the issue of microfoundations, because every microeconomics course taught that firms and households maximize, and that this maximizing behavior is what clears markets. These new Keynesians, however, realized that there is a time element involved in market clearing. It’s not that markets won’t clear, but that they won’t clear instantly, as earlier new classical economists assumed. Markets won’t clear in time period t because of sticky wages and sticky prices, seen especially in the labor market, where wages adjust sluggishly over time as a result of labor contracts. New Keynesians came in two early waves: those who looked at the allocation of resources when markets don’t clear in one time period, and those who combined rational expectations with sticky wages and prices. These first two waves failed to explain why prices and wages are sticky in the first place. Thus, a third wave of new Keynesians came onto the scene and answered the question by arguing that firms face menu costs when changing their prices, pay workers efficiency wages above the market level to increase productivity, and deviate from fully rational decision making. Mankiw evaluates macroeconomic theory up to this point and argues that as a science, macroeconomics was successful. So, how was it as an engineering discipline in evaluating policy? He suggests that the answer is much less positive.

Long-run growth, rather than short-run fluctuations, became a hot topic in the 1980s and early 1990s for three reasons: the ever-increasing gap between rich and poor countries, the growing availability of cross-country data, and the unexpectedly strong growth of the U.S. economy in the 1990s. The schism between new classicals and new Keynesians had been widening, but as the older economists retired, a newer and more civil generation stepped onto the scene and looked to “improve” the image of macroeconomics. One way to evaluate the degree to which economics is an engineering discipline is to look at Laurence Meyer’s A Term at the Fed. Meyer, a professor, served a term as a governor of the Federal Reserve. The book lets readers see the approaches taken in analyzing the economy, and the bottom line is that the work of new classicals, new Keynesians, and others has had “close to zero impact on practical policymaking.” What explains this? For one thing, there seems to be a disconnect in the field of macroeconomics. While the Federal Reserve is independent, it doesn’t commit to policy rules of the kind Friedman proposed, under which people could form expectations about the growth rate of the money supply. Mankiw views inflation targeting, a policy rule implemented by the European Central Bank (ECB) among others, as a way to “communicate with the public” rather than a rule derived from macroeconomic theory. What about low rates of inflation? Countries whose central banks have imposed inflation bands and those that have not (e.g. the United States) have both experienced low inflation for a long period of time. The explanation could be one of two things: either supply shocks aren’t as prevalent as they were during the oil crisis of the 1970s, or central banks have realized that high inflation, as experienced in the 1970s, is detrimental to the economy and should be avoided at all costs.

The other side of the coin is the effect macroeconomic theory has had on the practical applications of fiscal policy. The Bush tax cuts’ aim at consumption rather than income is consistent with the public finance literature, especially Atkinson and Stiglitz in the 1970s. The short-run analysis of tax policy is consistent with Keynesian economics: lower taxes mean more disposable personal income, which leads to higher demand for goods and services. In conclusion, Mankiw views modern macroeconomics as more science than engineering tool. The reason is not that the Federal Reserve and government ignore the new ideas and theories developing in the field. Rather, “modern macroeconomics research is not widely used in practical policymaking [because there is] little use for this purpose.” Undergraduates, though, are more like the engineer than the scientist. Except for the few who want to pursue economics in academia (i.e. science), the majority of undergraduate students want to see how macroeconomic tools can be applied to the real world for effective policymaking (i.e. engineering).

Source: Mankiw, N. Gregory. 2006. The macroeconomist as scientist and engineer. Harvard University (May): 1-26, http://www.economics.harvard.edu/faculty/mankiw/files/Macroeconomist_as_Scientist.pdf (accessed April 10, 2008).

A Response to Hoover’s “Is Macroeconomics for Real?”

April 8th, 2008

Kevin Hoover’s impetus for writing “Is Macroeconomics for Real?” comes from comments written anonymously on his class evaluations.  Many of the students side with the commonplace view among economists that macroeconomics isn’t “real” because it cannot stand alone.  Rather, it rests on microfoundations, something that has come up time and time again in our readings.  Oftentimes, older macro theories are dismissed because they don’t incorporate microfoundations, such as utility maximization functions and other maximizing behaviors.  Hoover, however, argues that macroeconomics is a stand-alone discipline that cannot be reduced to microeconomics.  Hoover starts with basic definitions, defining micro as the “economics of individual economic actions” and macro as “the economics of broad aggregates.”  Hoover notes, though, that Keynes didn’t define the two fields this way.  Though he didn’t use the terms macro and micro when making the distinction, to him [Keynes], microeconomics was the “theory of the individual industry or firm” and macroeconomics was the “theory of output and employment as a whole.”  Macroeconomics has expanded beyond Keynes’ definition, but the aggregates he referenced still refer to GDP, unemployment, interest rates, the flow of financial resources, etc.

I will attempt to answer Hoover’s question by summing up his claims, philosophical undertones and all.  He references Uskali Maki (1994) to define the “real” in the title of his article.  Maki distinguishes ontological from semantic realism: ontological realism concerns “what there is,” while semantic realism concerns the connection between “language and what there is.”  Remember, Hoover’s aim is to see whether macroeconomics can remain independent of microfoundations.  Through his investigation and questioning, he determines that macroeconomic aggregates “exist externally”–that is, they don’t rely on microfoundations, which is a huge shakeup from mainstream economic thinking since the 1940s.  Lionel Robbins (1935) makes the blanket statement that “economics is the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses.”  By this statement alone, economics is about the individual, a microeconomic slant.  Keynes, however, developed the modern theory of macroeconomics with three main equations: the consumption function, which relates aggregate consumption to national income; the investment function, which relates investment to interest rates; and the liquidity-preference function.  Thus, going solely off Robbins’ definition, Keynes’ contributions would be invalid because his models aren’t based on the behavior of the individual.

Hoover continues by looking at Mark Blaug’s (1992) individualism, which claims that “social, political, or economic phenomena” cannot be explained without understanding the decisions of individuals.  Even Augustin Cournot in the 19th century recognized that “there are too many individuals and too many goods to be handled by direct modeling.”  Blaug, nevertheless, goes on to observe that “few explanations of macroeconomic phenomena have been successfully reduced to their microfoundations.”  Robert Lucas (1987) is a strong supporter of the individualism principle and the idea of microfoundations.  He and his colleagues have worked extensively on new classical economics, which assumes that representative economic agents (the individual) make decisions to reach their optimal choices.  Essentially, macro theory must use these fundamental microeconomic elements (i.e. utility maximization, consumption maximization, etc.) in order to have any validity.  A. P. Kirman (1992) criticizes the idea of a “representative agent” because it fails to represent actual individuals.  Individuals inherently seek to maximize their utility and consumption, but without rationally modelling and plotting out their optimal points along a budget constraint.  David Levy (1985) argues along the same lines, because information isn’t perfect.  As previous blogs have alluded to and directly mentioned, the assumptions of some of these macro theories, though built upon microfoundations, don’t hold water because they are too “naive” and simplistic.  Though it helps with the model, what good is a model that doesn’t accurately capture actual observed behavior?

Hoover’s next set of arguments is based around the “validity” of the macroeconomic aggregates.  Nobody doubts that GDP, unemployment, and interest rates are interconnected.  People do disagree, however, that these aggregates are the “fundamental units” that construct economic reality.  Hayek (1979) says it best: these entities are secondary, because they cannot be explained and fully understood without an understanding of their individual components.  This statement reverberates through all the macro theory that criticized other theories for failing to be based upon microfoundations.  Nevertheless, even Hayek doesn’t believe in the pure definition of individualism, citing the Cournot problem.  (CAN SOMEBODY PLEASE TELL ME WHAT THE COURNOT PROBLEM IS?  I think it has to do with being unable to model an entire economy agent by agent because there are too many individuals and too many goods, as Cournot said above, but I’m not sure.)

Referring back to the aggregates that consume macroeconomics, Hoover states that there are two kinds: natural and synthetic.  Natural aggregates are simple sums or averages, such as total employment or an average interest rate on commercial paper or Treasury securities over a certain period.  Hoover terms them natural because they are measured in the same units as the individual quantities from which they are built.  Synthetic aggregates, on the other hand, are “fabricated out of components” and therefore have a different structure.  The main example here is the aggregate/general price level.  A simple average of all prices will not work, because apples and oranges cannot be added together.  The ultimate goal is to find the price of money, to see what something is worth in real terms.  Again, this is difficult to accomplish because the overall economy is complex; capturing all of its movements would take thousands of equations, which is next to impossible and very, very time consuming.  Hoover goes on to discuss the indexes that have to be constructed.  Indexes give insight into general price levels because percent changes in certain goods and services weigh more heavily on the overall economy and the “price of money” than others.  For instance, Hoover says a change in the price of gasoline will have a larger impact than a change in the price of caviar.  Thus, indexes reflect weights applied to certain industries and sectors of the economy.  (As a side note, PPIs and CPIs are calculated with and without food and energy because these two areas of the economy are the most volatile and have a large impact on the perceived rate of inflation.)  The same thought process explains the need to calculate real GDP.  Price changes are bound to occur, so nominal GDP can increase even if quantity does not change.  Real GDP is needed to distinguish whether prices rose while output stood still, or whether the economy produced more output through more efficient methods.  If the latter is true, real GDP will go up.  If the former is true, only nominal GDP will increase, due to the rise in prices.
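
To make the weighting and deflating concrete, here is a toy calculation in the spirit of Hoover’s gasoline-versus-caviar point.  All of the numbers are invented for illustration.

```python
# A toy weighted price index and real-GDP calculation, with invented numbers.
# Expenditure weights matter: gasoline has a large budget share, caviar a tiny one.
base = {"gasoline": (3.00, 100.0), "caviar": (50.00, 0.1)}   # (price, quantity)
new_prices = {"gasoline": 3.30, "caviar": 100.00}            # +10% and +100%

# An unweighted average of price changes badly overstates inflation:
naive = sum(new_prices[g] / p for g, (p, q) in base.items()) / len(base)
print(f"unweighted average of price relatives: {naive:.3f}")   # 1.55

# Laspeyres-style index: cost of the base-period basket at the new prices.
base_cost = sum(p * q for p, q in base.values())                 # 305.0
new_cost = sum(new_prices[g] * q for g, (p, q) in base.items())  # 340.0
index = new_cost / base_cost
print(f"weighted index: {index:.3f}")   # ~1.115, dominated by gasoline's weight

# Real GDP: deflate nominal GDP so pure price changes don't count as growth.
nominal_gdp = 1115.0
print(f"real GDP: {nominal_gdp / index:.1f}")  # ~1000: the rise was all prices
```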

The following section discusses supervenience, which I don’t understand.  I will quote the passage but cannot provide insight, because it does not make sense to me.  On page 12, Hoover says, “Macroeconomic aggregates I believe supervene upon microeconomic reality.  What this means is that even though macroeconomics cannot be reduced to microeconomics, if two parallel worlds possessed exactly the same configuration of microeconomic or individual economic elements, they would also possess exactly the same configuration of macroeconomic elements.”  The reverse doesn’t necessarily hold true.  On a different note, Hoover discusses irreducible aggregates and their ability to be manipulated.  Some macroeconomic aggregates can not only be controlled but “can be used to manipulate other macroeconomic aggregates” (i.e. real interest rates and price levels and their effect on yield curves).  He ends by stating that the paper only attempted to show the current behavior and interplay between macroeconomics and microeconomics.  Hoover does go on to mention that there are macroeconomic aggregates that are irreducible and, consequently, cannot be built upon microfoundations.  Therefore, these entities are indeed “real.”

Source: Hoover, Kevin D.  1999.  Is macroeconomics for real?  University of California-Davis (June): 1-22, http://users.umw.edu/~sgreenla/e488/Macreal.htm (accessed April 8, 2008).

A Response to Wynne’s “Sticky Prices: What is the Evidence?”

April 3rd, 2008

Mark Wynne looks at the evidence on whether changes in the stock of money have implications for the real side of the economy (i.e. employment, growth rates) in the short run.  This has obvious implications for the effectiveness of monetary policy, because through open market operations the Federal Reserve controls the money supply.  The issue has been debated for over two hundred years, and the standard answer rests on the claim that “prices are ‘sticky’ at nonmarket-clearing levels,” which would let money directly affect the real factors of the economy.  Suppose that people were magically inundated with more money than they had before, and suppose that this increase in the money supply was a one-time, unexpected policy.  Since it can be assumed that each person was holding his optimal amount of cash before the increase, this excess cash would be spent.  However, if everybody simply spent their excess cash holdings to return to their optimal cash position, nothing would motivate producers to put out more output.  Thus, the long-run result would be an increase in the price level in the same proportion as the increase in the money supply.  New Keynesians, however, are interested in the “transition stage” between the event that knocked the economy out of equilibrium and the time equilibrium is restored.  This transition, according to Wynne, could take one of two forms–either an instantaneous increase in the price level, which would end the story, or a period of price rigidity.  The rigidity of prices is the more interesting situation.  If some producers are slow to raise their prices, due to menu costs and the other frictions discussed in class (even though nominal demand has increased with this excess money in the economy), then output in the short run may increase without an increase in prices.  This would show up as a real increase in the short run, lasting until all firms have had a chance to raise prices in proportion to the initial increase in the money supply.  Wynne mentions in the introduction that his article will focus on sticky prices rather than wages, because what many analysts read as wage stickiness, the failure of wages to adjust to changes in the economy, may instead reflect labor contracts, under which the wage is paid out in installments in the form of paychecks.  It is because of these locked-in labor contracts that wages don’t adjust as often as prices.
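
Wynne’s thought experiment can be put into numbers with the quantity equation.  This is a back-of-the-envelope sketch with made-up values, not anything from the article.

```python
# The thought experiment via M*V = P*Y, with velocity V held fixed and a
# one-time, unexpected 10% money injection (all numbers assumed).
M, V, Y = 1000.0, 4.0, 2000.0
P = M * V / Y                        # initial price level = 2.0
M_new = M * 1.10

# Long run: output is unchanged, so prices rise in the same proportion as money.
P_long_run = M_new * V / Y
print(f"long-run price level: {P_long_run:.2f}")   # 2.20, a 10% rise

# Transition: suppose half of producers are slow to raise prices (menu costs),
# so the average price level only adjusts partway and output is demand-determined.
P_sticky = 0.5 * P_long_run + 0.5 * P
Y_short_run = M_new * V / P_sticky                 # real output rises temporarily
print(f"short-run output: {Y_short_run:.0f} vs. {Y:.0f} before the injection")
```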

Wynne’s earlier study with Sigalla in 1993 concluded that the raw data used to compile the producer price index and the consumer price index are often list prices instead of transaction prices.  There are two explanations for this practice: firms protect themselves against potential antitrust litigation, and they keep their actual selling prices from falling into the hands of competitors.  To get around this dilemma, the BLS takes the average sticker price at various stores along with the average discount or coupons associated with the purchase of the product.  This averaging of raw price data makes it difficult to assess the flexibility of prices.  Since some average prices fluctuate more than their “constituent price series,” this, too, makes for an unreliable estimate of price flexibility.  Wynne points to the earliest study of the frequency of price changes, conducted by Mills (1927).  For 206 commodities in the wholesale price index (WPI), Mills computed a ratio ranging from 0 to 1, equal to zero if the price never changed over the period monitored and one if the price changed every period recorded.  The distribution of these ratios was U-shaped–that is, many commodities exhibited almost no price changes over the recorded time frame, many exhibited price changes almost every period, and fewer fell in the middle range.  The products that exhibited the most price changes were farm products.  (An interesting note is that during WWI the distribution wasn’t U-shaped, but instead showed an even spread of commodities with ratios in the middle and a large group at the right-hand side.)  The two criticisms of Mills’ work are that he used averages and that he used list prices rather than transaction prices.
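
Mills’ frequency-of-change ratio is simple enough to sketch directly.  The two price series below are invented for illustration.

```python
# A sketch of Mills-style frequency-of-change ratios: for each commodity,
# the fraction of periods in which its price changed (0 = never, 1 = every period).
def change_ratio(prices):
    changes = sum(1 for a, b in zip(prices, prices[1:]) if a != b)
    return changes / (len(prices) - 1)

magazine = [0.50, 0.50, 0.50, 0.50, 0.75, 0.75]   # sticky: one change in five periods
wheat = [2.10, 2.25, 2.18, 2.40, 2.33, 2.51]      # flexible farm product
print(change_ratio(magazine), change_ratio(wheat))  # 0.2 and 1.0
# Mills found the distribution of these ratios across 206 commodities was
# U-shaped: clusters near 0 (rigid prices) and near 1 (flexible prices).
```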

Cecchetti’s (1986) study of magazines is a good example of a price stickiness study.  He was able to get away from the criticisms that plagued Mills (1927), because newsstand magazine prices are transaction prices and there are few discounts associated with magazines.  His sample period, 1953-1979, suggested a high degree of price stickiness: during the high-inflation 1970s, nominal cover prices were raised so infrequently that real prices were eroding between changes.  He therefore concluded that menu costs–that is, the fixed costs of changing prices–were very high.  Nevertheless, his study had shortcomings of its own that Mills didn’t face.  He looked at newsstand prices of magazines, but many people buy subscriptions, which raises an issue similar to the criticism involving labor contracts.  That is, these individuals enter into a contract with the magazine company for a year, and subscriptions often come with discounts.  This common arrangement, unfortunately, isn’t reflected in Cecchetti’s 1986 study.

Koelln and Rush (1993) look at whether controlling for the quality of a product affects measured price rigidity, something Cecchetti couldn’t control for.  Looking at magazine data from 1950-1989, the two conclude that Cecchetti’s estimate of price rigidity was overstated.  Counting the number of pages of text, they find that as inflation “erodes the real price of the magazine,” the number of pages of text declines.  Therefore, apparent “price rigidity” can be confounded with the declining quality of the product.  Carlton (1986) revisits Stigler and Kindahl (1970), who looked at transaction prices rather than list prices of various industrial commodities.  Stigler and Kindahl (1970) collected data from buyers rather than sellers, because buyers have less of an incentive to report list prices.  From these data, Carlton concludes that industrial commodities, especially steel, chemicals, and cement, had prices that stayed unchanged for periods of at least one year.  Other studies have been conducted in the retail business.  Kashyap (1991) looked at retail catalogues and concluded that nominal prices remain unchanged for periods of at least one year, and when prices do change, both the magnitude and number of changes are irregular.  Blinder (1991) conducted interviews with firms and found that fifty-five percent of the firms interviewed claimed to change their prices no more than once a year, with only ten percent claiming to change prices monthly.  An interesting note from Blinder’s study is that three-fourths of the firms will change something other than price (i.e. delivery lags, quality of products) when demand is tight.

Wynne then offers some overall assessments of the price stickiness literature.  Many of the studies he mentions deal with only a small fraction of the country’s GDP (e.g. magazines).  Others deal with intermediate products rather than finished products (e.g. industrial commodities).  Lastly, and most importantly, price rigidity studies should deal with transactions actually involving money.  Since many products are bought on credit, the prices studied may not represent the demand for money, and consequently these studies cannot determine whether money plays an important role in price rigidities.  Wynne also brings to the reader’s attention that studies such as Cecchetti and Stigler-Kindahl reaffirmed their theories of price rigidity rather than searching for it.  What I mean by this is that these studies picked areas of the economy where it was already suspected that prices were inflexible, and thus produced biased results that reaffirmed, rather than proved, that prices in these markets were rigid.  Carlton (1983) also criticizes the studies done on price rigidities.  For instance, it is known that price controls during WWII held nominal prices at a constant level; to get around the controls, the quality of the products being offered was decreased.  Thus, in a sense, the products were no longer homogeneous, because of the varying quality of the products being assessed.

Wynne concludes that there is little evidence to suggest that prices are sticky in the overall economy.  Given how much thinking rests on the assumption of price stickiness, he was shocked that only about three studies could be produced that showed actual price stickiness.  Posted prices can also look sticky while firms adjust on other margins–by withholding delivery during heightened demand or by lowering the quality of the product.  In essence, just because markets take longer to clear than in a Walrasian auction doesn’t mean the evidence points to price rigidities.  To return to the original question regarding the effectiveness of monetary policy and its effects on the real side of the economy: only a small degree of price rigidity needs to be in place for external monetary shocks to trace out the observed business cycle.  And even if all prices were deemed flexible, monetary policy could still affect the real side of the economy–the shocks would then simply come through macro market failures or market incompleteness.

Source: Wynne, Mark A.  1995.  Sticky prices: What is the evidence?  Federal Reserve Bank of Dallas Economic Review (1st Quarter): 1-12.

A Response to Greenwald and Stiglitz’s “New and old Keynesians”

April 1st, 2008

Greenwald and Stiglitz start off by making three claims upon which old and new Keynesians would agree: there will be an excess supply of labor at the going market wage; the aggregate level of output fluctuates with a greater magnitude than can be accounted for by short-run changes in technology; and money matters, though monetary policy has proven ineffective during certain periods (e.g. the Great Depression).  What distinguishes them from new classicals is the notion that government intervention via policy decisions can be effective some of the time.  From the start, the two authors draw the comparison to new classical and RBC theorists.  Those schools of thought hold that all markets clear in one time period; that there are no sticky prices or wages; that unemployment is voluntary, shown by shifts of supply and demand in the labor market; and that there are no macro market failures, which allows for efficient responses to externalities (i.e. shocks).  As noted by Greenwald and Stiglitz, the only difference between new classicals and RBC theorists is the source of the shocks that affect the aggregate output of the economy: for the new classicals it is shocks to the money supply, whereas RBC theorists focus on technology shocks.  Nevertheless, the two schools, though basing their macroeconomic models on microeconomic foundations or “microfoundations,” assume that firms interact in perfectly competitive markets, that information is perfect, that there are no transaction costs, and that no risk is borne by economic agents, since all individuals are homogeneous.  Greenwald and Stiglitz end their introduction with a few questions about the “validity” of these earlier macro models.  Some things that cannot be answered by new classicals or RBC theorists are why work hours vary, why some industries see higher rates of layoffs, and why investment and inventories in certain industries are so volatile.

The article’s jumping-off point is price rigidities, both nominal and real.  The motivation is the observation that markets don’t clear in one time period.  If they did, prices and wages would be flexible, and whenever the market encountered a shock it would adjust instantaneously, maintaining full employment and output at its potential.  This isn’t what we see, which is why the discussion turns to these inflexibilities.  According to the authors, markets actually benefit from having rigid prices and wages, because rigidity lessens the volatility and magnitude of fluctuations in the economy.  To explain the rigidities observed in the market, Greenwald and Stiglitz introduce three basic ingredients, all found in markets with imperfect information: risk-averse firms, a credit allocation mechanism in which risk-averse banks play a central role, and new labor market theories that include “efficiency wages and insider-outsider models.”

Risk-averse firms have two options for raising funds: issue equity or issue debt.  There is much less risk in issuing equity, because the firm shares the risk with those who provide the finance.  Issuing debt, on the other hand, creates an obligation to repay, and thus the risk of going bankrupt.  So it seems obvious that firms would issue equity, but there is a downside, as the authors point out: the market perceives equity issues negatively.  The market’s opinion is that the “worst firms” are the ones most likely to issue equity, because overvalued firms have the most to gain from selling additional shares.  Why are firms risk averse?  The answer is that managers run firms that way, because they know the status quo and are less able to predict what will happen if the firm changes its actions (termed “instrument uncertainty”).  Just as in modern portfolio theory, firms assess various portfolios of actions to assume the least risk for a given return, or vice versa.  If prices change, so too will the actions of firms and their resulting portfolios, whether by changing the price a firm charges or the quantity it produces to keep customers content.  The example given of why firm risk aversion matters is a recession.  In a recession, a firm has less cash to operate with and lower profits, which reduces both its real net worth and its liquidity.  To remain at its original output level, the firm would be forced to borrow against that reduced net worth; in other words, the firm would assume more debt, which raises the probability that the debt won’t be repaid and the firm will go bankrupt.  Therefore, during recessions, firms reduce output to compensate for lower real net worth and less liquidity rather than take on more risk, shifting supply curves to the left.  The authors also mention that investment will be especially volatile in the construction market, because that market is made up of numerous small firms, many of which lack easy access to the equity market and therefore rely heavily on financing their endeavors through debt instruments.  They point to one more example, in which a decrease in net exports lowers the exporter’s net worth.  This leads to a decrease in the exporter’s demand for inputs, which drives down prices in input markets; a “spillover” effect travels from firm to firm and from market to market.  It’s because of this spillover phenomenon that micro-level industries cannot simply be aggregated to get the macro picture.  Rather, these spillovers, as this simple example shows, compound and amplify as they move from one firm to another and from one market to another.

The second basic ingredient behind price rigidities is the credit market and risk-averse banks.  Unlike the goods market, which works like an auction in which the good is sold to the highest bidder, the credit market doesn’t function in this manner.  Because lending institutions are risk averse and worried about loans not being repaid, they will not lend to the highest bidder.  Rather, they use a technique called credit rationing, in which “interest rates are chosen to maximize the expected utility of the lender.”  Like firms, banks are risk averse, and need to be even more so in today’s age of the subprime mortgage meltdown.  Instead of screening customers to see whether they had a high probability of repaying, lenders seemed to violate the Greenwald-Stiglitz argument by effectively running an auction and selling loans to whoever wanted one.  As with firms, banks respond to a recession.  As the economy worsens, banks’ perceptions of the relative riskiness of loans increase.  Since bad economic times mean a higher rate of loan defaults, banks see their net worth fall as loans are “sold” but not repaid.  In response to these hardships, banks engage in portfolio management, shifting their composition toward less risky assets (e.g. Treasury bills).  According to Greenwald and Stiglitz, equilibrium could then only be reached at a higher interest rate, which would discourage investment.  However, this isn’t the observed behavior: new Keynesians argue that price rigidities exist to reduce the magnitude of fluctuations in the market and to keep customers content, so banks will not raise interest rates, and investment is not discouraged.  In the aggregate, this leads banks to assume greater risk.  As a result, the Federal Reserve can be effective in a few ways–changing reserve requirements and the discount window–rather than the accustomed lowering of the federal funds rate.  (Lowering the federal funds rate may not decrease the supply of loans enough to make the banks more “sound.”  Using the other two monetary tools can increase a bank’s net worth, because it can borrow from the Fed at a cheaper rate.)
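
As a rough illustration of why a risk-averse lender wouldn’t run an auction, here is a stylized sketch (my own invented functional forms, not Greenwald and Stiglitz’s model) in which the probability of repayment falls as the loan rate rises.

```python
import numpy as np

# Stylized credit rationing: assume the probability of repayment falls as the
# loan rate rises, because higher rates attract riskier borrowers (assumed
# adverse-selection relationship, purely illustrative).
rates = np.linspace(0.01, 0.40, 400)
repay_prob = 1.0 - 1.8 * rates
expected_return = rates * repay_prob - (1.0 - repay_prob) * 0.10  # 10% loss on default

best = rates[np.argmax(expected_return)]
print(f"lender's expected-return-maximizing rate: {best:.3f}")
# The optimum is an interior rate: even with excess demand for loans, raising
# the rate further would lower the lender's expected return, so the bank
# rations credit instead of auctioning loans to the highest bidder.
```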

The third ingredient is the labor market.  Old Keynesian economics referred to the unemployment phenomenon but didn’t discuss the workings of the labor market.  New Keynesians offer an alternative to the new classical view by claiming that even though a worker is willing to work at the going market wage, he may not find work; this is the phenomenon known as involuntary unemployment.  It can be caused by efficiency wages, insider-outsider dynamics, imperfect competition, and implicit contracts.  Efficiency wage theory says that higher real wages lead to higher productivity, partly through the attraction of higher-quality labor.  The insider-outsider theory claims that “outside” workers won’t be hired at cheaper wages, because the “insiders” are the ones responsible for training them.  Since labor is heterogeneous, insiders and outsiders are of different quality (the insiders have been trained and the outsiders have not), so the two aren’t perfect substitutes.  Insiders do not want to be replaced by cheaper outside workers, and since they control the training process, they will refuse to train outsiders hired at a lower real wage.  The third reason for sticky wages has to do with imperfect competition, under which each firm sets its own wages, prices, and employment levels.  As mentioned earlier, firms are risk averse and cannot predict the consequences for their activities and production of cutting the real wage, which is why they don’t cut it.  Lastly, the idea of implicit contracts echoes throughout the article.  In a nutshell, firms want to keep their employees happy and content, and to do so they must give employees an incentive to stay with the firm during “boom periods,” when they could easily find a better job elsewhere.
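
The efficiency-wage logic can be sketched numerically.  Below is a minimal example with an assumed effort function (my invention, not from the article), in which the firm minimizes the cost per unit of effort rather than the wage itself.

```python
import numpy as np

# Minimal efficiency-wage sketch with an assumed effort function
# e(w) = (w - w0)^beta for w > w0.  The firm minimizes the cost of an
# effective unit of labor, w / e(w), not the wage bill directly.
w0, beta = 10.0, 0.5
wages = np.linspace(10.01, 40.0, 3000)
cost_per_effort = wages / (wages - w0) ** beta

w_star = wages[np.argmin(cost_per_effort)]
print(f"chosen wage: {w_star:.2f}")   # ~20 = w0 / (1 - beta)
# If the market-clearing wage is below w_star (say 15), the firm still pays
# w_star: cutting the wage would lower effort more than it saves in payroll.
# A wage above the market level persists, so unemployment can be involuntary.
```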

As learned in class, nominal price rigidities also exist because of “menu costs.”  That is, the costs of changing prices (disseminating the information to consumers, the physical costs of changing prices, etc.) may outweigh the benefits.  In many instances, a firm faces a flat-topped profit function, in which several price-output combinations produce very similar profits.  If this is the case, it doesn’t pay for a firm to change its price: the profit gain is tiny, and these firms have been shown to be risk averse and don’t want to disrupt the status quo.  Game theory also plays a part in the rigidity of prices and wages under the new Keynesian model.  Since the money supply is not perfectly observed by all agents, not all agents will change their prices proportionally.  Because of this uncertainty about how other economic agents will react to changes in the money supply, it would be sub-optimal to increase your own prices by the full proportion of the money supply increase.  Therefore, no agent increases prices, at least not by as much as the increase in the money supply.
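
Here is a quick numerical illustration of the flat-topped profit function, with invented demand and cost numbers.

```python
# A flat-topped profit function in action: profit(p) = (p - c) * (a - b*p).
# Near the optimum, profit is insensitive to the price, so a small menu cost
# can make "don't reprice" the optimal choice.  All numbers are invented.
a, b, c = 100.0, 2.0, 10.0
profit = lambda p: (p - c) * (a - b * p)

p_star = (a + b * c) / (2 * b)            # 30.0, the profit-maximizing price
for p in (p_star, 0.95 * p_star, 1.05 * p_star):
    print(f"p = {p:5.2f}: profit = {profit(p):.2f}")
# A 5% price error costs only b*(0.05*p_star)^2 = 4.5 out of 800 in profit,
# i.e. ~0.6%.  If the menu cost of repricing exceeds that, the firm leaves
# its price unchanged even after demand or costs shift.
```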

Greenwald and Stiglitz end their discussion by looking at the RBC theorists and the new classicals.  They note that the RBC school focused on economic volatility and attributed it to external and unforeseen technology shocks.  But from that standpoint, how would you explain a recession?  Was there a negative technology shock?  New classicals hold that imperfect information is the reason there are deviations around potential output and full employment.  While imperfect information and the resulting shifts in demand and supply curves matter (i.e. in response to a shock), it isn’t the principal reason.  What confused me about the critique of the new classical model was that it didn’t explain in layman’s terms what was lacking and what was “improved upon” by new Keynesian economics.  I think what Greenwald and Stiglitz hope the reader comes away with is that imperfections exist in the macroeconomy, and that these imperfections can amplify at the macro level, leading to deviations from potential output and abnormally high levels of unemployment.

Source: Greenwald, Bruce, and Joseph Stiglitz.  1993.  New and old Keynesians.  Journal of Economic Perspectives 7 no. 1 (Winter): 23-44.  (This can be found in the class Reader.)

A Response to Mankiw’s “Symposium on Keynesian Economics Today”

March 30th, 2008

Keynesian economics assumed that microeconomic data and markets could simply be aggregated to produce a macro-level picture of the economy.  The Keynesians were criticized for this very notion–that is, for not building their theory of aggregate demand on microfoundations.  As Mankiw states in the “Symposium,” the Phillips curve relationship seemed to disappear with the stagflation of the 1970s.  As we learned early on in class, Keynesian theory focused on shifts in aggregate demand, either during a recession or during a growth period, but never both parts of the business cycle.  That is where RBC models were better able to explain the path of the economy.  Though unable to predict the path outright, RBC economists could, through parameterization, continually tweak the model to produce a path along which the economy is achieving its optimal point.

Nevertheless, the 1970s and early 1980s were when supply-side economics became big, with President Reagan behind it.  As we learned in class, the main reason Reaganomics came to the forefront was that it offered something different.  Though Reaganomics and RBC models were at the forefront, few of the economists who became New Keynesians could accept the new classical assumptions that firms operate in perfectly competitive markets and that all markets clear in time period t.  Rather, firms aren’t perfectly competitive, and not all markets clear in one time period.  James Tobin, who argued against markets clearing in one time period, attributed this to macro market failures; for example, the goods market may be in equilibrium but not the labor market.  Therefore, New Keynesians couldn’t build off the Keynesian assumption that macro is simply aggregated micro, because you can’t aggregate microeconomic markets when not all of them clear in one time period as the new classicals assumed.  This suggests that there are unforeseen market forces (i.e. externalities) that are multiplied, rather than simply arithmetically added, across industries, which is why micro cannot simply be added up.  Mankiw points out that these market failures are felt most acutely when the economy is going through recessions and depressions.  The 1970s, as mentioned earlier, brought high levels of unemployment: the new classicals viewed this unemployment as voluntary, whereas the New Keynesians did not.

Bringing this blog back to its opening statements, New Keynesians had to build a model developed from microfoundations–that is, from the goods, labor, and capital markets, where firms maximize profit and households maximize utility.  From this, they could explain why markets don’t clear in one time period.  Disequilibrium occurs because of sticky prices and wages.  David Romer discusses this price rigidity, saying that because firms are “imperfectly competitive,” they face small barriers to price adjustment that have large macroeconomic effects (once again, this refers to the unforeseen externalities and the spillover effect from one industry into another).  James Tobin, however, feels that the New Keynesians aren’t “asking the right questions” and that the role of price rigidities has been exaggerated.  As learned in class, though, the business cycle is the reason we experience sticky prices, all of which arise from microeconomic elements.

Mankiw’s “Symposium” ends with some rhetorical questions about what the New Keynesian line of thinking will do for macroeconomic theory.  Will progress be long and arduous, or will this be the theory on which other models build?  Obviously, Mankiw didn’t have the answers, but I am still unclear about the difference between nominal and real rigidity.  It was discussed in the “Symposium,” and I was looking through my notes.  I have that nominal rigidity means that nominal prices are sticky (e.g. labor contracts).  Thus, even if we go into a recession and prices change, the labor contract holds, so the nominal price (i.e. what is spelled out in the contract) remains unchanged.  My notes then discuss real rigidity, which concerns the rigidity of relative prices.  How does the example of a monopsony condition with one employer fit with relative rigidity?  I understand that nominal prices are increasing at the same rate as the price level, but how does that fit with the issue of rigidity?

Source: Mankiw, N. Gregory.  1993.  Symposium on Keynesian economics today.  The Journal of Economic Perspectives 7, no. 1 (Winter): 3-4.

A Response to Hoover’s “The New Classical Macroeconomics”

March 20th, 2008

Chapter 5: The New Monetary Economics

New classicals have criticized monetarism, Keynesian monetary theory, and earlier new classical work for not starting from microfoundations.  Before Keynes’ General Theory was written in 1936, the distinction between microeconomics and macroeconomics was unknown.  Rather, there were two prevalent bodies of theory at the time–monetary theory (general price levels) and value theory (relative prices).  A year before the General Theory, John Hicks discussed monetary theory, which he claimed had its roots in value theory.  The difficult question was explaining why people held non-interest-bearing money when interest-bearing assets were available.  He concluded that money was held to overcome what he termed “frictions,” or transaction costs and risk.  The early quantity theory of money derives from the Walrasian system, in which all markets clear because prices adjust to bring supply and demand into equilibrium.  The price level (absolute or general) was held to be ultimately determined by the quantity of money in circulation, so long as the velocity of circulation was accounted for.  From this stemmed discussions of inflation.  Monetarists such as Milton Friedman suggested that in the long run the growth rate of the money supply affects only the general price level, not the real output of the economy.  The inflation phenomenon, then, is one in which more money is supplied than is demanded.

Patinkin took the quantity theory of money a step further by exploring real balances.  He noticed that economic activity could be carried on at any absolute price level, but money has a value, and that value depends on the aggregate price level.  As a result, Patinkin divides money holdings through by the price level to get the real purchasing power of money, and incorporates that quantity into utility functions.  This allows for the fusion of monetary and value theory, because the demand for money is defined the same way as the demand for any other good: it carries utility.  In addition, the level of absolute prices is determined in conjunction with all relative prices, so long as there is an “anchor” or standard to pin those prices down (i.e. gold).  The principal criticism of Patinkin’s theory, however, is its reliance on the absence of distribution effects.  Ruling these out assumes that if an economic agent received an increase in his cash balance, he would increase his supply and demand proportionately, leaving the relative proportions unchanged.  This also assumes that every economic agent is alike, so it wouldn’t matter who received the increased cash balance.
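
My rough understanding of the real-balance idea can be checked with a toy example.  The Cobb-Douglas form below is my own illustrative assumption, not Patinkin’s specification.

```python
# A tiny check of the real-balance idea: put M/P in the utility function and
# note that doubling all prices and the money stock changes nothing real.
def demands(real_income, M, P, alpha=0.8):
    """Split real resources between consumption c and real balances M/P,
    using Cobb-Douglas shares: u = c^alpha * (M/P)^(1-alpha)."""
    resources = real_income + M / P     # real resources include real balances
    c = alpha * resources
    real_balances = (1 - alpha) * resources
    return c, real_balances

print(demands(real_income=100.0, M=50.0, P=1.0))   # (120.0, 30.0)
print(demands(real_income=100.0, M=100.0, P=2.0))  # (120.0, 30.0): same real outcome
# Only the real quantity M/P matters, which is how Patinkin-style real balances
# fuse monetary theory (the price level) with value theory (relative prices).
```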

Gurley and Shaw (1960) look at the finance process.  They define money with two of the three characteristics Patinkin used–that is, money is the medium of transactions, and its demand is assumed to “arise from uncertainty about the timing of receipts and payments.”  The new element in the Gurley-Shaw theory is that money can be viewed as debt, so that one person’s asset is exactly another’s debt; sum the two and they cancel out.  Viewed as a means of exchange, money is useful in transactions precisely because of this uncertainty, and the uncertainty is what causes differences in rates of return across portfolios.  Fama’s paper is important because he claims money exists only because of “government-imposed legal restrictions on other financial assets.”  However, I don’t understand his argument about being able to get rid of fiat money if the government introduces it into the economy, or the story about ingots and government taxation on spaceships (pages 95-97).

Chapter 5 ends with a discussion of banking and finance and how it relates to the Modigliani-Miller theorem.  According to Hoover, the theorem says that how a firm finances its real activities has no effect on the decisions of other economic agents.  Fama assumes a few things.  First, there are perfect capital markets (i.e. “no taxes, transactions costs, or danger of bankruptcy”).  Second, there are rational expectations.  Third, economic agents care about risk-return tradeoffs as they pertain to changes in wealth.  Fourth, firms’ investment decisions are made independently of how the investment is financed.  Lastly, economic agents have the same access to capital markets–that is, if a firm can issue a liability, so, too, can an individual agent.  A firm can alter its debt-equity ratio, for example, which changes the real return opportunities available to economic agents; but to return the economy to its state prior to the change in the firm’s finances, economic agents need only modify their own portfolio composition.  Fama concludes by stating that relative prices are independent of financial portfolios–that is, relative prices are determined by fiat money or commodities, and absolute prices (with inflation built in) are independent of financial assets.  The idea I don’t comprehend is that a sophisticated, more developed financial system relies on the presence of money, because a financial asset is essentially a claim on something else, and converting one asset into another is done through money.  The liquidity of money is of utmost importance because of the “lack of necessary connections between the amount of outstanding claims to goods of conversion.”  That is, money tends to become the good in which “accounts are settled and into which financial assets are ultimately convertible.”  Contrary to Fama’s conclusion, Hoover asserts that relative prices are, in fact, not independent of finance.
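
The Modigliani-Miller logic itself can be illustrated with a toy “homemade leverage” calculation.  The numbers are my own, not from Hoover or Fama.

```python
# A numeric "homemade leverage" check of the Modigliani-Miller idea: an
# investor can replicate the firm's debt choices on personal account, so
# financing shouldn't change anyone's real opportunities (toy numbers).
ebit, r, debt = 100.0, 0.05, 400.0   # operating income, interest rate, firm debt
stake = 0.10                         # investor owns 10% of the equity

# Strategy 1: buy 10% of the levered firm's equity.
levered_payoff = stake * (ebit - r * debt)

# Strategy 2: buy 10% of an identical all-equity firm, borrowing 10% of the
# firm-level debt (40.0) on personal account at the same rate.
homemade_payoff = stake * ebit - r * (stake * debt)

print(levered_payoff, homemade_payoff)   # 8.0 and 8.0: identical payoffs
# Since the payoffs match, the two positions must cost the same, so the
# firm's debt-equity mix cannot affect its total value (absent taxes, etc.).
```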

Source: Hoover, Kevin D.  1988.  The new classical macroeconomics.  Cambridge, Massachusetts: Basil Blackwell.

A Response to Summers’ “Some skeptical observations on real business cycle theory”

March 13th, 2008

Lawrence Summers starts out with the comparison of Keynesian macroeconomic theory to astrology–that is, both are “premised on the relevance of variables that are in fact irrelevant.”  According to real business cycle economists, Keynesian economics didn’t explain the macroeconomy because it wasn’t based on microfoundations.  As we have learned in class, RBC models are built on utility maximization and profit maximization principles, which operate at the microeconomic level.  Summers takes Prescott’s “Theory Ahead of Business Cycle Measurement” as the target of his critique of RBC models.  Prescott’s article essentially asserts that the theory cannot currently be tested, because measurement tools capable of testing it do not yet exist.  Summers critiques Prescott’s article on four levels: the parameter estimates, the shocks present in the model, the absence of price data, and exchange failures.  Summers goes on to say that throughout history, theories have been developed that seemed plausible (or at least a good starting point) because they “mimic” or approximate reality well enough for their period (e.g. the Earth was once considered the center of the universe).  However, as measurement tools improve and people become more aware of their surroundings, theories change.  This is what the critique attempts to determine–whether Prescott’s theory mimics the economy in its current state coincidentally, or actually captures the observed business cycle.

With respect to the parameter estimates, Summers can find no evidence to support Prescott’s claim that one-third of all household time is devoted to market activities.  Other studies, such as Martin Eichenbaum, Lars Hansen, and Kenneth Singleton (1986), have estimated it to be only one-sixth since 1956.  In addition, Prescott’s model assumes an average real interest rate of four percent, yet over the thirty-year period studied, the real interest rate averaged only about one percent.  Summers’ last critique with regard to parameter estimates is Prescott’s inability to produce evidence supporting his assumed elasticity of labor supply.  According to Summers’ reading of many studies, labor supply is only minimally affected by changes in the real wage.

As in many RBC models, the observed cyclical behavior is driven by external/exogenous shocks, known as technology shocks.  However, Summers claims that Prescott has no direct evidence that such shocks drive business cycle movements.  Even the oil shocks of the 1970s didn’t contribute to “large movements in measured total factor productivity.”  Negative productivity growth has been observed only in small sectors of the economy, such as mining and construction.  In addition, technology shocks may not be as large as originally thought.  Studies, especially Jon Fay and James Medoff (1985), suggest that part of the observed cyclicality is due to firms holding more labor than necessary during troughs.  Under this “labor hoarding,” firms hold labor in excess of regular production requirements during recessions, because hiring and firing workers is deemed more costly than continuing to pay their wages.  When the economy is at a peak, the labor force is fully utilized and appears productive; when a recession occurs, the excess laborers are kept on rather than fired, which makes each unit of labor appear less productive, so measured factor productivity decreases.  While Summers doesn’t regard “labor hoarding” as a technology shock, many RBC economists treat any deviation from long-run potential output as one.
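
The labor hoarding point can be illustrated with a toy Solow-residual calculation.  The numbers and the capital share are assumed for illustration.

```python
import math

# How labor hoarding can masquerade as a technology shock.  Measured
# productivity uses employed labor; true output reflects utilized labor.
alpha = 0.3                      # assumed capital share
K = 100.0

def solow_residual(Y, K, L):
    """log A = log Y - alpha*log K - (1-alpha)*log L  (Cobb-Douglas)."""
    return math.log(Y) - alpha * math.log(K) - (1 - alpha) * math.log(L)

# Boom: 100 workers fully utilized.  Recession: output falls 10%, but the
# firm lays off only 2 workers and lets the rest idle part of the time.
print(f"boom residual:      {solow_residual(Y=100.0, K=K, L=100.0):+.3f}")
print(f"recession residual: {solow_residual(Y=90.0, K=K, L=98.0):+.3f}")
# Technology hasn't changed, yet measured total factor productivity drops:
# hoarded labor makes the residual procyclical, mimicking a negative shock.
```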

The third argument that Summers makes concerns the absence of price data in Prescott’s model.  I don’t fully understand this paragraph because I do not comprehend what Summers means when he argues that Prescott’s model was tested without price data.  To me, it is unfathomable how any economic model can be empirically tested without paying attention to prices.  I will have to do some exploration as to what a “price-free economic analysis” means in the eyes of Summers.  The last objection to the Prescott model is its inattention to exchange breakdowns.  The critique mentions that studies of the Great Depression made it clear that firms had output to sell and workers wanted to exchange their labor for those products, but there was a breakdown in the mechanism for exchanging labor for products, and the exchange never transpired.  This “breakdown” caused U.S. GNP to decline fifty percent over the years 1929 to 1933.  The best explanation for these exchange mechanism failures is the breakdown of the credit markets during the same period.

Summers sums up the critique by saying that economists will continue to be better at explaining the behavior of individual economic agents than at explaining the equilibria that emerge when many economic agents interact.  That said, Summers stresses the importance of being able to explain why exchange mechanisms break down; if that can be accomplished, then these macroeconomic models will be able to help forecast economic fluctuations.  What I was hoping the critique would address is the parameterization issue that was discussed in class on 3/12.  If models are constantly changing to fit the data, how can we be sure that a given model is the best one out there and not one that simply “mimics” the available data–which is something that Summers starts out discussing?

Source: Summers, Lawrence H.  1986.  Some skeptical observations on real business cycle theory.  Federal Reserve Bank of Minneapolis Quarterly Review (Fall): 23-27.  (This can be found in the Reader.)

A Response to Stockman’s “Real Business Cycle Theory: A Guide, an Evaluation, and New Directions”

February 28th, 2008

Alan Stockman comes right out and states the purpose of real business cycle (RBC) models–that is, these models are used to “explain aggregate fluctuations in business cycles without reference to monetary policy.”  In fact, he goes on to make four assertions as to why the real business cycle model is important.  First, from the evidence gathered, monetary policy does not affect real output as much as economists once believed.  Second, even if it does affect real output, it isn’t the driving force behind the business cycle.  Third, supply shocks and other non-monetary occurrences usually influence aggregate fluctuations to a greater degree than monetary policy.  Lastly, real business cycle models can be used to determine how disturbances affect different sectors of the economy.  These real business cycle models incorporate the following series: GDP, consumption, fixed assets (i.e. investment, nonresidential structures, and equipment), average nonfarm employment, and capital stocks.  Stockman notes that others have used variations of the real business cycle model to include cross-sector analyses that track output and production across the aggregate economy in order to trace disturbances from one sector to another.

Stockman starts the discussion of real business cycle (RBC) models with two assumptions–people maximize utility over different combinations of leisure and consumption, and there is a technology coefficient governing the transformation of capital and labor into output, which can be either consumed by households or reinvested in the capital stock for period t+1.  Stockman then traces some early prototype models at the roots of the RBC literature.  One of the first was Kydland and Prescott’s, which used backwards induction to build an abstract model incorporating the utilization of capital, lagged effects of leisure on utility, and imperfect information about productivity.  Hansen extended the Kydland-Prescott 1982 model by adding a “lottery on employment,” in which people are assumed to either work full time or not at all, with no part-time work; this lottery assumes that those who work and those who do not are randomly selected.  The Greenwood et al. model looks at current and future investment.  It shows that increases in consumption, labor supply, output, and investment are due to current economic conditions; technology shocks enter as one factor, but they would only increase future output through increases in future capital, while current output is driven by current conditions.  Kydland and Prescott’s 1988 model incorporates the idea that the cost of greater utilization of capital is greater utilization of labor; this variable (longer) work week predicts the variability in U.S. inventories.  Parkin (1988) tried to calculate the parameters of the Cobb-Douglas production function using labor data from the GDP accounts.  He determined that these parameters varied over time and, as a result, was able to calculate the technology shock.  Because his model showed the share of leisure (the preference shock) to be relatively stable over time, preference shocks can be viewed as unimportant to RBC models.  The last model mentioned is that of Christiano and Eichenbaum (1988), which treated government shocks as shifting the labor supply curve; coupled with technology shifts, these two movements could induce changes in the real wage.
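
For concreteness, Stockman’s two assumptions can be collected into the generic planner’s problem that underlies these models (a standard textbook statement, not Stockman’s exact notation):

```latex
\max_{\{c_t,\, n_t,\, k_{t+1}\}} \; E_0 \sum_{t=0}^{\infty} \beta^{t}\, u(c_t,\, 1 - n_t)
\qquad \text{subject to} \qquad
c_t + k_{t+1} = z_t f(k_t, n_t) + (1 - \delta)\, k_t
```

Here $c_t$ is consumption, $1 - n_t$ is leisure, $z_t$ is the technology coefficient, $\delta$ is depreciation, and unconsumed output plus undepreciated capital becomes the capital stock for period $t+1$.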

Stockman continues his discussion of RBC models by listing the parameters often included in them.  These include the fraction of total time spent working versus in leisure, the psychological (subjective) discount factor, the rate of capital depreciation, the marginal rate of substitution in consumption, the marginal rate of substitution in leisure, labor’s share of GDP, and the variance of the productivity shocks.  However, one common criticism is that using RBC models to explain periods in which real output falls requires the logic that negative technology shocks exist.  (Negative productivity growth is most commonly observed in smaller sectors of the economy.)  What Robert Hall (1988) deems most important is the ability to differentiate temporary reductions (in aggregate output) from permanent reductions (in measured output).  However, I don’t understand the difference between the above logic dealing with negative technology shocks and the distinction between measured output and total output (page 32).
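
The “negative technology shock” criticism is easiest to see in the driving process these models typically use.  Below is a minimal sketch (the parameter values are mine, chosen only for illustration) of the usual AR(1) log-productivity process: any period in which output falls, with inputs held fixed, must correspond to a draw in which technology literally regresses.

```python
import random

# Minimal sketch of the usual RBC driving process (illustrative parameters):
# log productivity follows z_t = rho * z_{t-1} + eps_t, eps_t ~ N(0, sigma).
rho, sigma = 0.95, 0.007
random.seed(0)

z = 0.0
for t in range(1, 9):
    eps = random.gauss(0.0, sigma)
    z = rho * z + eps
    # With inputs held fixed, output moves one-for-one with z, so a
    # negative eps is, literally, technology getting worse this period.
    tag = "technological regress" if eps < 0 else "technological progress"
    print(f"t={t}: eps={eps:+.4f} ({tag}), log productivity z={z:+.4f}")
```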

In an interesting note on the criticisms, Stockman observes that econometric tests may reject the RBC model because measurement errors exert a large influence on the results.  Yet even if they prove to be wrong, RBC models have been shown to give better advice to policymakers than other incorrect theories.  Another criticism is that involuntary unemployment isn’t explained by RBC models.  The example looks at two individuals with the same tastes, preferences, and characteristics, the only difference being that one is employed and one is unemployed.  The model will not differentiate between the two; its only explanation is that unemployment is the result of a random fluctuation in productivity within the economy.  Going back to the technology criticism, Stockman raises the possibility that technology shocks may be more influential for particular groups of industries than for the nation as a whole.  This suggests that though technology shocks are important, nation-specific disturbances play at least as large a role in output fluctuations as these technology shocks.

To see whether the RBC model explains the fluctuations of international economies, there are two “tests.”  One is to see whether the goodness of fit improves with the addition of more variables, though this requires more equations and more parameters.  The other is to apply the RBC model, with the same criteria and parameters, to a different set of macroeconomic facts.  RBC models should work fairly well here because exchange rates capture the relationship between currencies (relative and nominal) driven by “real shocks,” which are exactly what RBC models are trying to explain.  However, the obstacle that will be encountered is that different economies have different parameters and disturbances.

The ultimate question is whether these real business cycle models should be used for optimal policy decisions.  If the assumptions of RBC models are true, then monetary policy has no effect on the real output of the economy.  To achieve optimal policy responses, as was suggested in our reading of Kydland and Prescott, there can be no fiscal or monetary interventions, because policies are made based on the expectations and conditions of time t.  A policy is implemented assuming the status quo, but once it is in place, expectations in time t+1, t+2, … , t+n will change and the policy will no longer achieve the optimal response.  Another consideration, which goes along with the Kydland-Prescott model, is that the economy’s responses to changes in regimes/administrations bring about these suboptimal conditions.  Stockman ends the discussion by stating that fluctuations in the economy are most likely optimal responses to uncertainty, rather than the failure of markets to clear, which was the general thinking of Keynesian economists and monetarists.  Essentially, RBC-minded policymakers should be concerned with long-run rates of technological change and low inflation instead of large fluctuations in GDP, because those fluctuations are random and shouldn’t be “massaged” by either monetary or fiscal intervention.
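
The time-inconsistency logic in this paragraph can be made concrete with the standard textbook example in the spirit of Kydland and Prescott (my illustration, not Stockman’s): a policymaker who dislikes inflation but targets unemployment below the natural rate, facing an expectations-augmented tradeoff,

```latex
\min_{\pi}\; L = \tfrac{1}{2}\pi^{2} + \tfrac{\lambda}{2}\,(u - u^{*})^{2}
\quad \text{s.t.} \quad u = u^{n} - \theta\,(\pi - \pi^{e}), \qquad u^{*} < u^{n}
```

Once expectations $\pi^{e}$ are set, the ex-post optimum under rational expectations ($\pi^{e} = \pi$) works out to $\pi = \lambda\theta\,(u^{n} - u^{*}) > 0$ with $u = u^{n}$: positive inflation and no employment gain, so the zero-inflation policy that was optimal to announce at time t is no longer optimal to carry out.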

Source: Stockman, Alan C.  1988.  Real business cycle theory: A guide, an evaluation, and new directions.  Economic Review 24, no. 4 (Quarter 4): 24-43.

A Response to Hoover’s The New Classical Macroeconomics

February 19th, 2008

In Kevin Hoover’s introduction, he discusses as to why Keynesian economics began to fade away by the 1970s.  The “Keynesian dominance of macroeconomics” ended because of the “absence of microfoundations for macroeconomics.”  Hoover cites two prime examples where microfoundations are incorporated into aggregate economic relationships–the life-cycle model and permanent-income hypothesis.  Don Patinkin sought to prove these two theories using the value theory of money and rational expectations and decided that Keynesian economics was the “economics of disequilibrium” because when labor markets are in equilibrium, there is no involuntary unemployment.  In continuing reading, I couldn’t quite understand the microfoundations that Keynes had left out of his model nor why his models dealing with the labor market were considered in disequilibrium.  Hoover continues the macroeconomic story by mentioning the fact that both Milton Friedman and Edmund Phelps recognized that there is no long-run trade off between unemployment and inflation.  Only in the short run, however, can the two be traded off so as long as people mistake absolute prices (nominal) for higher relative prices (real).  Thus, it is important to note that both Friedman and Phelps recognized that expectations plays a large role in determining how high inflation can be to lower the unemployment rate because once people don’t confuse the real and nominal prices, the two rates will both increase and there will no longer be a trade off.  It was at this point that John Muth’s idea of rational expectations started to confound policymakers and thus macroeconomic policy was proven to be ineffective during the 1970s when the country faced both high inflation rates and high unemployment rates.  This idea of stagflation and incorporation of rational expectations is what Kevin Hoover cites as the second factor that led to the downfall of Keynesian economics.

New classical macroeconomics rose to prominence in the 1980s, when economists built rational expectations into their models because they believed that “macroeconomic models are legitimate only if they possess market-clearing microfoundations grounded in individual rationality.”  The first question that needs to be addressed is the difference between classicals and Keynesians.  In class, we did look at this comparison, but I will reiterate it in the blog once more.  Classicals view the aggregate supply curve as vertical.  As the price level rises, the real wage falls and employers want to hire more labor, but workers won’t work for lower real wages; therefore, the labor market is no longer in equilibrium (demand > supply), and the only way to return to equilibrium is for nominal wages to rise by as much as the increase in the price level.  At that point the old real wage is restored and full employment is at the same level as before the rise in prices.  Therefore, inflation doesn’t affect potential GDP.  According to Hoover, the Keynesians viewed the aggregate supply curve as a J-curve; Keynes couldn’t fully explain it, but he assumed there was some involuntary unemployment in the labor market.  As a result, employers could hire those individuals at lower real wages, which would raise employment above its previous level and push actual GDP up toward potential.  However, even Keynes recognized that the real wage couldn’t drop below its market-clearing level (where supply = demand) because workers wouldn’t accept such low real wages.  In this section of Chapter 1, Hoover makes one last comparison between classicals and Keynesians: the classicals believed that the monetary side (i.e. changes in the money supply) only affected nominal output, whereas the Keynesians believed that the monetary side could have real effects on GDP.  Afterwards, he comments that the neoclassical synthesis reconciled classicals and Keynesians by describing the vertical section of Keynes’ J-curve, because at that point the economy is in equilibrium.  (It is here that Hoover notes their biggest accomplishment was the development of the Phillips curve.)

The quantity theory of money was “kept alive” by the monetarists, who also believed that markets clear in the long run, which is modeled by a vertical aggregate supply curve.  The defining mark of a monetarist is someone who subscribes to the notion that inflation is a monetary phenomenon–that is, if the money supply is increased by x percent, then the price level will increase by x percent and output will return to potential GDP.  (In the short run, however, Hoover notes that an increase in aggregate demand will increase actual GDP, but at a higher price level.  When people’s expectations catch up with the higher prices, the aggregate supply curve will shift up and GDP will return to potential, but at even higher prices than before.)
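
The monetarist proposition in this paragraph is just the quantity theory written in growth rates (standard notation, not necessarily Hoover’s):

```latex
MV = PY \quad \Longrightarrow \quad \%\Delta M + \%\Delta V \approx \%\Delta P + \%\Delta Y
```

With velocity $V$ stable and output $Y$ pinned at potential in the long run, an x percent increase in the money supply shows up one-for-one as an x percent increase in the price level.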

When new classical economics came into being, it was first considered “radical monetarism” because it built expectations into the Phillips curve model.  Essentially, the aggregate demand-aggregate supply model (long- and short-run) looks identical, except that the aggregate supply curves have been replaced by “virtual aggregate supply” curves that reflect “money illusion” (i.e. relative prices rather than absolute prices should matter to people); once people’s expectations account for the random errors, they move off of the curve.  Thus, if a shift in aggregate demand is expected, the price level will increase but output will remain at potential.  However, if the change is unanticipated, then actual GDP will move past potential GDP at a higher price level, and when people realize prices are higher, actual GDP will move back to potential at the cost of even higher prices.  Essentially, new classicals view the aggregate curves as nothing more than “crude devices that do not reveal the underlying behavior of optimizing individuals.”  According to the new classicals, these graphical relationships shouldn’t be the basis for any economic analysis.
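
The “virtual aggregate supply” idea is usually written as the Lucas surprise supply function (a textbook rendering, not Hoover’s exact formulation):

```latex
y_t = \bar{y} + \alpha\,(p_t - E_{t-1}\, p_t), \qquad \alpha > 0
```

An anticipated demand shift ($p_t = E_{t-1}p_t$) raises only the price level, while an unanticipated one opens a temporary gap between actual output $y_t$ and potential $\bar{y}$ that closes, at still higher prices, once expectations catch up.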

Hoover lists three tenets of the new classicals–(1) savings, consumption, and investment decisions are based on real, not nominal, factors; (2) agents seek to maximize but are constrained by the limits of their information; and (3) agents make decisions with rational expectations.  The author also stresses the role of rational expectations.  As in class, the book discusses two forms of rational expectations: either people do the best they can with the information they have (the weak form) or people construct a model of the world and use that model to form their expectations (the strong form).  Hoover extends the idea of rational expectations to Milton Friedman’s natural rate of unemployment hypothesis (1968).  Here, Hoover validates Friedman’s hypothesis by stating that there will be short-run deviations from the natural rate only because people mistake changes in their nominal wages for changes in their real wages.

Lucas and Rapping (1969) modified Friedman’s natural rate hypothesis, and theirs is probably the first paper to deserve the title “new classical.”  The two point out that Friedman assumed labor supply to be elastic–that is, as wage rates increased, the labor supply would increase indefinitely.  However, this is not fully true, because labor supply depends on population constraints and demographic changes, and as a result the long-run labor supply is inelastic with respect to the real wage.  Friedman built his unemployment model on people’s adaptive expectations of wages in periods t-1 and t, but Lucas and Rapping didn’t think adaptive expectations explained the fluctuations in unemployment.  Under adaptive expectations, people’s expectations would always lag behind the real wage in time t whenever inflation rose unexpectedly; consequently, laborers would consistently think that their real wage was higher than normal, and only eventually would employment fall back to its original level.  This persistent lag is why Lucas and Rapping didn’t think adaptive expectations fit into the natural rate model.  Nevertheless, the one point that Lucas stresses is that agents act rationally but will still make random mistakes, large or small.  The key to rational expectations, then, is to develop a model that minimizes those mistakes so that an agent can discern which price changes are real and which merely reflect inflationary pressures.
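
The lag Lucas and Rapping objected to is built into the adaptive-expectations formula itself (a standard statement of the mechanism, not their notation):

```latex
w_t^{e} = w_{t-1}^{e} + \lambda\,(w_{t-1} - w_{t-1}^{e}), \qquad 0 < \lambda \le 1
```

The expectation is always a weighted average of past wages, so if inflation keeps accelerating, $w_t^{e}$ sits systematically below the actual wage and workers persistently misread their real wage–exactly the pattern described above.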

Source: Hoover, Kevin D.  1988.  The new classical macroeconomics.  Cambridge, Massachusetts: Basil Blackwell.

A Response to Bosworth’s Tax Incentives and Economic Growth

February 15th, 2008

Bosworth notes that the term supply-side economics is used in two different senses.  One is the broad sense, in which the “volume and quality of the capital and labor inputs” determine aggregate supply; the other is a narrower focus emphasizing that tax reductions “increase the supply of saving, investment, and labor.”  Supply-side economics received a lot of attention in the late 1960s and 1970s due to high inflation rates.  The classicals didn’t view demand management as a problem; they thought prices would adjust in response to disequilibria in the markets, especially the labor market, and that as a result of quick price changes the markets would clear.  Keynesian economists came along next and observed that a decrease in demand in one market would lead to unemployment, and this unemployment would cause reductions in other markets because the incomes of the unemployed would fall and they would consume less.  Where the Keynesians and classicals differed, therefore, was over the speed at which prices adjust to changes in the market: the classicals thought price levels adjusted quickly, while the Keynesians observed sticky prices, and it was those sticky prices that created a gap between supply and demand.  Up until the 1960s, the general consensus was that government policy did little to change the short-run path of potential GDP.  (For individual markets, potential output can be changed, because resources can be shifted from a market where returns aren’t maximized to one where the resources are used to their fullest potential.)  The overall economy is different, however, in that supply is limited by the growth rates of both labor and capital.  Thus, since demand management seemed ineffective in combating the high rates of inflation and unemployment of the late 1960s and 1970s, supply-side economics became the focus.  The new economic thought, however, had its own problems–political in nature–because supply-side economics deals with changes in taxes.

At first, though, people weren’t too receptive to the idea of supply-side economics because they felt that the government had gone too far in redistributing income through taxes, eroding people’s incentive to work.  Bosworth does note here that the neo-Keynesians cannot be blamed for all of the problems centered on inflation: demand had expanded greatly during the Vietnam War, and the disruption of oil from the Middle East in the early 1970s created domestic economic problems.  The neo-Keynesians did, however, try to curb inflation in the 1970s by inducing mild recessions, at the expense of higher unemployment.  Because the public was dissatisfied with the roller-coaster ride of the inflation-unemployment tradeoff, new theories emerged to guide macroeconomic policy.

It was an observed trend of the 1970s that growth in both the overall economy and worker productivity slowed (and between 1977 and 1982, worker productivity failed to grow at all).  At first this was perceived as temporary, but it soon became apparent that this might be the new state of the economy–a shock to people who had been used to an average of 3 percent growth since the end of WWII.  The frustration came as workers perceived that their real wages were falling due to the high inflation rates.  More importantly, the recognition that productivity growth had stalled made people realize that there would be few extra resources to put toward social programs, and this laid the groundwork for increased social conflict among the various racial and ethnic groups in America.

Out of this foreseeable social conflict came the notion that greater emphasis needed to be placed on the supply side of the economy.  At the time, there was little or no consensus as to the “sensitivity of wages and prices to changes in demand and the responsiveness of supply to changes in wages, prices, taxes, and government benefits.”  Economists agree that “fixprice” markets represent the short run, where prices are rigid and don’t respond to changes in the money supply because of lagged effects, whereas “flexprice” markets represent the long run, where prices move positively with changes in the money supply.  (The grey area, however, is how to define the short and long run.)  As for the second question, no one has been able to quantify the magnitude of the supply response–how strongly saving and investment choices react to changes in rates of return or relative prices.

Supply-side economists concur with the American neoclassicals that supply (i.e. capital and labor) is largely affected by changes in relative prices, and tax reductions act like price changes because they leave people with more take-home income.  What supply-side economists believe is that more money in people’s hands via tax reductions will spur entrepreneurial innovation and thus greater work effort.  (This differs from the earlier view that tax reductions would ultimately reduce work effort, since a worker who can reach the same after-tax income with fewer hours might choose to work less.)  Supply-side economists criticized Keynesian economics on the grounds that Keynesians believed in “involuntary unemployment” due to mismatches between aggregate supply and demand.  The supply-side economists countered that the only source of “involuntary unemployment” was information lags and errors in the forecasting models developed by firms.  Some even went as far as to state that fiscal policy has little or no effect on aggregate demand if the public’s behavior is insensitive to interest rates; these individuals therefore favored monetary policy for affecting the rate of change of GDP.

All in all, the supply-side view credits the increases in both capital and labor to the 1964 tax reduction: without the increase in after-tax incomes, there wouldn’t have been increased spending, which in turn led to increases in “production, employment, investment, and income.”  The rise in after-tax incomes had a significant impact on worker productivity; facing a lower tax bracket, employees offered to work more hours in return for a higher after-tax wage.  Where the criticism of the supply-side view comes into play is over the magnitude of the effects these supply changes (i.e. in capital and labor) really had on productivity and output.  Were people responding strictly to economic incentives (i.e. lower tax rates), or did improvements in technology have a larger impact?  Of course, numerous factors are at play that cannot be controlled outside of a laboratory setting.  On the one hand, the substitution effect says that people will work more hours at the expense of leisure because the after-tax wage is higher; on the other, the income effect says that people will want more leisure and therefore give up work hours.  The impact on saving is another open question: some argue that increased after-tax income encourages people to save more, but others hold that people consume based on their present and future streams of income and so would consume more in the present, which makes the net change in saving ambiguous.  The effects on capital investment are also uncertain, because higher after-tax returns would make capital cheaper relative to labor, yet “cheaper” capital wouldn’t alter the production process that measurably, since firms make decisions based on combinations of both capital and labor.
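
The ambiguity of the labor-supply response can be shown with a tiny numerical sketch (my own illustration, with an assumed utility function and parameters, not Bosworth’s): under log utility over consumption and leisure, the income and substitution effects of a tax cut exactly cancel, so hours worked don’t move at all; tilt the preferences either way and hours rise or fall.

```python
import math

# Sketch (my own illustration, not Bosworth's): log utility over consumption
# and leisure, u = ln(c) + gamma*ln(1 - n), with consumption c = w*(1-tau)*n.
# Grid-search the optimal hours n under two tax rates.
gamma, w = 1.0, 20.0  # assumed preference weight and pre-tax wage

def best_hours(tau, steps=100_000):
    best_n, best_u = 0.0, float("-inf")
    for i in range(1, steps):
        n = i / steps                       # hours worked, 0 < n < 1
        u = math.log(w * (1 - tau) * n) + gamma * math.log(1 - n)
        if u > best_u:
            best_n, best_u = n, u
    return best_n

for tau in (0.40, 0.25):                    # before vs. after a tax cut
    print(f"tax rate {tau:.0%}: optimal hours = {best_hours(tau):.3f}")
# Both print ~0.500: the substitution effect (work more at a higher after-tax
# wage) and the income effect (afford more leisure) cancel exactly under log
# utility, so the sign of the response is an empirical, not theoretical, matter.
```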

Source: Bosworth, Barry P.  1984.  Tax Incentives and Economic Growth.  Washington DC: The Brookings Institution.