Macroeconomic stabilization through an employer of last resort.

Scott T. Fullwiler

The employer of last resort (ELR) policy proposal, also referred to as the job guarantee or public sector employment, is promoted by its supporters as an alternative to the use of unemployment as the primary means of maintaining currency stability (see Forstater 1998; Mitchell 1998; Wray 1998; 2000; Mitchell and Wray 2004; Tcherneva and Wray 2005; and numerous publications available at the Center for Full Employment and Price Stability and the Centre of Full Employment and Equity). The core of the ELR proposal is that a job would be provided to all who want one at a decent, fixed wage; the number of workers employed in the program would rise and fall counter to the economy's cycles as workers moved between public and private sector work depending on the state of the economy. Supporters have played an important advisory role in Argentina's Jefes de Hogar (hereafter, Jefes) jobs program, which has provided jobs to over two million citizens--or five percent of the population; though there are some important differences, the Jefes program has many similarities with the ELR proposal (Tcherneva and Wray 2005).

While ELR proponents argue the program would not necessarily generate budget deficits (Mitchell and Wray 2004), the program is based upon Abba Lerner's (1943) concept of functional finance, in which what matters is the effect of the government's spending and taxing policies upon employment, inflation, and macroeconomic stability (Nell and Forstater 2003). This is in contrast to the more widely promoted concept of "sound" finance, in which the presence of a fiscal deficit is itself considered undesirable. Rather than asking whether they can "afford" an ELR program, ELR proponents argue that societies would do better to consider whether they can "afford" involuntary unemployment. (1) The proposed ELR's approach of hiring "off the bottom" is argued to be a more direct means of eliminating excess, unused labor capacity than traditional "military Keynesianism" or primarily "pump-priming" fiscal policies, particularly given how the U.S. economy struggles to create jobs for the poor even during economic expansions (Pigeon and Wray 1998; 1999; Bell and Wray 2004). As Wray (2000, 5) notes, "[h]ow many missiles would the government have to order before a job trickles down to Harlem?" More traditional forms of fiscal stimulus or stabilization remain useful complements to an ELR program, though proponents argue that only the latter could ensure that enough jobs would be available at all times such that every person desiring a job would be offered one, while also potentially adding to the national output.

Regarding macroeconomic stability, proponents argue that the fluctuating buffer stock of ELR workers and the fixed wage are the key features ensuring the program's impact would be stabilizing. With an effectively functioning buffer stock, the argument goes, as the economy expands ELR spending will stop growing or even decline--countering the inflation pressures normally induced by expansion--as some ELR workers take jobs in the private sector. Regarding the fixed wage, traditional government expenditures effectively set a quantity and allow markets to set a price (as in contracting for weapons); in contrast, the ELR program allows markets to set the quantity as the government provides an infinitely elastic demand for labor, while the price (the ELR base wage) is set exogenously and is unaffected by market pressures. Together, proponents argue, the buffer stock of ELR workers and the fixed wage thereby encourage loose labor markets even at full employment. Aside from an initial increase as the program is being implemented (the size of which will depend upon the wage offered compared to the existing lowest wage and whether the program is made available to all workers), proponents suggest the program would not generate inflationary pressures and thus would promote both full employment and price stability.
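The buffer-stock logic can be seen in a few lines of code. The Python sketch below is a hypothetical illustration only, not anything from the paper or the Fairmodel; the wage, labor force, and employment figures are invented for exposition.

    # Toy sketch of the ELR buffer-stock mechanism described above.
    # This is a hypothetical illustration, not the paper's Fairmodel
    # specification; all parameter values are assumptions chosen for
    # exposition.

    ELR_WAGE = 8.0          # fixed program wage (the price is set exogenously)
    LABOR_FORCE = 100.0     # millions of workers, held constant here

    def elr_pool_and_spending(private_employment):
        """Quantity floats, price is fixed: the program hires everyone the
        private sector does not, so ELR spending moves countercyclically."""
        elr_workers = max(LABOR_FORCE - private_employment, 0.0)
        spending = ELR_WAGE * elr_workers      # program outlays per period
        return elr_workers, spending

    # A stylized cycle: private employment falls in a slump, then recovers.
    for period, private_jobs in enumerate([95, 92, 90, 93, 97, 99]):
        workers, outlays = elr_pool_and_spending(private_jobs)
        print(f"t={period}: private={private_jobs}, ELR pool={workers:.0f}, "
              f"ELR spending={outlays:.0f}")

As the loop runs, the pool of ELR workers expands in the slump and shrinks in the recovery while the wage never moves, which is the quantity-adjusting, price-fixed behavior described above.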

The purpose of this paper is to quantitatively model the potential macroeconomic stabilization properties of an ELR program using the Fairmodel (Fair 1994; 2004). The paper builds upon the earlier Fairmodel simulations of the ELR in Majewski and Nell (2000) and Fullwiler (2003; 2005). Here, a rather simple version of the ELR program is incorporated into the Fairmodel and simulated. The quantitative effects of the ELR program within the Fairmodel are measured in two ways: via simulation over historical business cycles, and via stochastic simulation in comparison with alternative fiscal and monetary policy rules.
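Before turning to the model itself, a toy example may help convey what stochastic simulation of a policy rule involves. The Python sketch below is a one-equation stand-in, not the Fairmodel; the AR(1) output-gap process, the parameter values, and the ELR-like spending rule are all assumptions made for illustration.

    # Minimal sketch of the stochastic-simulation logic used to compare
    # policy rules. A one-equation stand-in for the Fairmodel with assumed
    # parameters; the actual model solves roughly 130 equations at once.
    import numpy as np

    rng = np.random.default_rng(0)

    def mean_variance(policy, n_draws=1000, horizon=40, rho=0.8, sigma=1.0):
        """The output gap follows an AR(1) process hit by drawn shocks;
        'policy' maps the current gap into a stabilizing response."""
        variances = []
        for _ in range(n_draws):
            gap, path = 0.0, []
            for shock in rng.normal(0.0, sigma, horizon):
                gap = rho * gap + shock + policy(gap)
                path.append(gap)
            variances.append(np.var(path))
        return np.mean(variances)

    no_rule = lambda gap: 0.0
    elr_like = lambda gap: -0.3 * min(gap, 0.0)  # spends only in slumps,
                                                 # i.e., hires "off the bottom"

    print("mean variance, no rule: ", mean_variance(no_rule))
    print("mean variance, ELR-like:", mean_variance(elr_like))

The comparison of average variances across many drawn shock histories is the essential idea; the paper's actual exercise performs the analogous comparison inside the full Fairmodel.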

The Fairmodel and Macroeconometric Simulation

The Fairmodel is a well-known, large macroeconometric model of the U.S. economy developed in the 1970s by Ray Fair. The model is dynamic, nonlinear, and simultaneous, and it incorporates household, firm, financial, federal government, state and local government, and foreign sectors of the economy. The model combines 30 stochastic equations, estimated using the two-stage least squares method, with another 100 identity equations. National Income and Product Account (NIPA) and Flow of Funds data are completely integrated into the model within the identity equations; balance sheet and flow of funds constraints are thus fully accounted for. There are 130 endogenous variables and over 100 exogenous variables.
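For readers unfamiliar with the estimator, the following Python sketch illustrates two-stage least squares on synthetic data; the instruments, coefficients, and sample size are invented and bear no relation to the Fairmodel's actual equations.

    # Hedged sketch of two-stage least squares, the estimator used for the
    # Fairmodel's stochastic equations. The data here are synthetic, not
    # Fairmodel data; instruments and coefficients are made up.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 500

    z = rng.normal(size=(n, 2))              # instruments (e.g., exogenous variables)
    u = rng.normal(size=n)                   # structural error
    x = z @ np.array([1.0, -0.5]) + 0.8 * u  # endogenous regressor, correlated with u
    y = 2.0 * x + u                          # structural equation, true beta = 2

    # Stage 1: project the endogenous regressor on the instrument set.
    Z = np.column_stack([np.ones(n), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

    # Stage 2: regress y on the fitted values from stage 1.
    X_hat = np.column_stack([np.ones(n), x_hat])
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]
    print("2SLS estimate of beta:", beta[1])  # near 2; plain OLS would be biased

Because the regressor is correlated with the error term in a simultaneous system, ordinary least squares is inconsistent; replacing the regressor with its projection on exogenous instruments restores consistency, which is why the method suits a model of 30 simultaneous stochastic equations.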

The overarching intellectual tradition of the Fairmodel is the Cowles Commission approach to econometric modeling, which is strongly empirical but nonetheless relies heavily on theory--and in the case of the Fairmodel, particularly on an acceptance of the possibility of market disequilibrium--in specifying the stochastic equations that are the model's core (see Fair 1994, chapter 1, for further discussion).

As a structural econometric model, the Fairmodel is admittedly subject to Robert Lucas's (1976) critique, which holds that estimated coefficients from structural models may not be invariant across policy regimes. Fair answers that "the logic of the Lucas critique is certainly correct, but the key question for empirical work is the quantitative importance of this critique" (1994, 13). Alan Blinder similarly notes that

while Lucas's conceptual point is valuable and indubitably correct, so are the well-known points that heteroskedastic or serially correlated disturbances lead to inefficient estimates and that simultaneity leads to inconsistent estimates. But we also understand that small amounts of serial correlation lead to small inefficiencies and that minor simultaneity leads to only minor inconsistencies; so suspected violations of the Gauss-Markov theorem do not stop applied econometrics in its tracks. In the same spirit, the realization is now dawning that the Lucas critique need not be a showstopper. Indeed, evidence that it is typically important in applied work is lacking. (1989, 107)

Fair and others have further pointed out that attempts to generate tests and reliable predictions from models based upon the "deep structural parameters" that Lucas prefers--such as real business cycle models or models employing rational expectations--have not been overly successful:

When deep structural parameters have been estimated from the first order conditions, the results have not always been good even when judged by themselves. The results in Mankiw, Rotemberg, and Summers (1985) for the utility parameters are not supportive of the approach. In a completely different literature--the estimation of production smoothing equations--Krane and Braun (1989), whose study uses quite good data, report that their attempts to estimate first order conditions were unsuccessful. It may simply not be sensible to use aggregate data to estimate utility function parameters and the like. (Fair 1994, 15)

On the other hand, Fair (1994; 2004) argues that the economic significance of the Lucas critique can be tested, and thus models that suffer in important ways from the critique can be "weeded out."

For its part, the design of the Fairmodel's stochastic equations--following the Cowles Commission approach--is driven more by data and statistical testing than that of so-called "modern" macroeconomic models based upon rational expectations and calibration. For each of the model's stochastic equations, Fair has performed statistical tests of dynamic specification, spurious correlation, serial correlation of errors, rational expectations, structural stability where the date of potential structural change is not known a priori, end-of-sample structural stability, and over-identifying restrictions.

Consequently--and importantly--the Fairmodel's basic structure has changed surprisingly little over time and, according to statistical tests, demonstrates structural stability across several business cycles and policy regime changes. Indeed, 29 of the 30 stochastic equations passed structural stability tests even in the face of the so-called "new economy" boom of the mid-to-late 1990s; since stability was rejected only for the stock valuation equation, these results led Fair to predict early on that the "new economy" was driven primarily by a stock market bubble, as none of the other equations provided a rationale for the increased equity values (see Fair 2000b; 2004, chapter 6). His related estimates of accelerations in trend and cyclical productivity during the 1990s from the model (reported in Fair 2004, chapter 6) were nearly identical to those reported in the influential study by Robert Gordon (2003). Furthermore, Fair (2004) shows that wealth effects (chapter 5) and interest rate effects of monetary policy (chapter 11) in the model are consistent with generally accepted empirical evidence; his research also suggests that assumed asymptotic distributions of the model's dynamics (as in modeling multiplier effects of policy actions, for example) are reasonable and not biased according to bootstrapping...
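To illustrate one such test, the Python sketch below computes a sup-F statistic for structural stability at an unknown break date on synthetic data. The bivariate regression, the trimming fraction, and this particular test variant are assumptions chosen for exposition rather than Fair's exact procedure.

    # Sketch of a sup-F stability test of the kind applied when the break
    # date is not known a priori (in the spirit of Andrews 1993). The
    # regression and trimming fraction are illustrative assumptions, not
    # Fair's exact procedure.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    x = rng.normal(size=n)
    y = 1.0 + 0.5 * x + rng.normal(scale=0.5, size=n)  # stable by construction

    def ssr(xs, ys):
        """Sum of squared residuals from OLS of ys on a constant and xs."""
        X = np.column_stack([np.ones(len(xs)), xs])
        resid = ys - X @ np.linalg.lstsq(X, ys, rcond=None)[0]
        return resid @ resid

    k = 2                                    # parameters per regime
    full = ssr(x, y)
    f_stats = []
    for b in range(int(0.15 * n), int(0.85 * n)):  # trim 15% at each end
        split = ssr(x[:b], y[:b]) + ssr(x[b:], y[b:])
        f_stats.append(((full - split) / k) / (split / (n - 2 * k)))

    # Compare the maximum to critical values for searched break dates,
    # not the standard F distribution, since the break was chosen to
    # maximize the statistic.
    print("sup-F statistic:", max(f_stats))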
